54 points

It’s like that painter who kept doing self-portraits through his Alzheimer’s.

33 points

we have to be very careful about what ends up in our training data

Don’t worry, the big tech companies took a snapshot of the internet before it was poisoned, so they can easily profit from LLMs without letting competitors into the market. That’s who “we” is, right?

19 points

It’s impossible for any of them to have taken a sufficient snapshot. A snapshot of all unique data on the clearnet would probably run to hundreds or thousands of exabytes, which is (apparently) more storage than any single cloud provider has.

And that’s before the prohibitively expensive step of processing all that data for any single model.
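Back-of-envelope on that scale, just for the raw disks (every figure below is my own rough assumption, not a number from the study):

```python
# Cost of merely storing a 1,000-exabyte snapshot on commodity drives.
exabytes = 1_000                  # upper end of the estimate above (assumed)
usd_per_tb = 15                   # assumed bulk HDD price
tb = exabytes * 1_000_000         # 1 EB = 1,000,000 TB (decimal units)
print(f"${tb * usd_per_tb / 1e9:,.0f} billion for bare drives alone")
# ~$15 billion, before replication, datacenters, power, and the far
# larger cost of actually crawling and processing the data.
```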

The reality is that, as with the natural world, they’re polluting and corrupting the internet without having taken a sufficient snapshot. And just as in the natural world, everything that’s lost is lost FOREVER… all in the name of short-term profit!

2 points

The retroactive enclosure of the digital commons.

28 points

GOOD.

This “informational incest” is present in many parts of society and needs to be stopped (one of the worst offenders is the intelligence sector).

14 points

Informational Incest is my least favorite IT company.

9 points

WHAT ARE YOU DOING STEP SYS ADMIN?

2 points

Too bad they only operate in Alabama

1 point

Damn. I just bought 200 shares of ININ.

2 points

they’ll be acquired by McKinsey soon enough

24 points

A few years ago, people assumed these AIs would keep getting better every year. It seems we’re already hitting limits, and improving the models keeps getting harder. It’s like the linewidth limits we’ve hit in CPU design.

11 points

I think that hypothesis still holds, since it always assumed training data of sufficient quality. What this study says is more that the places we’ve traditionally harvested training data from are starting to be polluted by low-quality, AI-generated data.
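A minimal toy sketch (mine, not the study’s code) of why recursive training degrades a model: fit a Gaussian to some data, sample from the fit, fit again, and repeat, so each generation trains only on the previous generation’s output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for gen in range(1, 21):
    # "Train" a model: estimate the distribution from the current data.
    mu, sigma = data.mean(), data.std()
    # The next generation sees only samples from the fitted model,
    # never the original data.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# The estimated sigma performs a noisy walk that tends downward over many
# generations, so the "model" gradually forgets the tails of the original
# distribution. LLM collapse is analogous, just much messier.
```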

20 points

It’s almost like we need some kind of flag on AI-generated content to prevent it from ruining things.

1 point

If that were implemented, it would help both AI devs and ordinary people online.
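There’s no agreed-upon standard for such a flag, but here’s a hypothetical sketch of how a training-data crawler could honor one. The `X-AI-Generated` header and `ai-generated` meta tag are invented names for illustration, not an existing spec.

```python
import requests                      # third-party: pip install requests
from html.parser import HTMLParser

class AIMetaFlag(HTMLParser):
    """Detects a hypothetical <meta name="ai-generated" content="true">."""

    def __init__(self):
        super().__init__()
        self.flagged = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if (tag == "meta" and attrs.get("name") == "ai-generated"
                and attrs.get("content", "").lower() == "true"):
            self.flagged = True

def safe_to_ingest(url: str) -> bool:
    """Skip pages that declare themselves AI-generated."""
    resp = requests.get(url, timeout=10)
    # Hypothetical response header, checked first since it's cheap.
    if resp.headers.get("X-AI-Generated", "").lower() == "true":
        return False
    parser = AIMetaFlag()
    parser.feed(resp.text)
    return not parser.flagged
```

It only helps if generators actually set the flag, of course.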

2 points

No, not really. The improvement gets less noticeable as it approaches the limit, but I’d say the pace of improvement is still the same, especially for smaller models and context window sizes. There are now models comparable to ChatGPT, or maybe even GPT-4 (I don’t remember which), with a 128k-token context window that you can run on a GPU with 16 GB of VRAM. 128k tokens is around 90k words, I think. That’s more than four Bee Movie scripts, and the model can “comprehend” all of that at once.
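Rough sanity check on those numbers; the words-per-token ratio and bytes-per-parameter figures below are common rules of thumb, and the 13B parameter count is just an assumed example, not a specific model.

```python
# 128k tokens -> words, using the common ~0.75 words-per-token heuristic
context_tokens = 128_000
print(f"~{context_tokens * 0.75:,.0f} words")       # ~96,000, so "around 90k" holds

# VRAM for weights: params x bytes per param, assuming 8-bit quantization
params = 13e9                                       # e.g., a 13B-parameter model
print(f"~{params * 1 / 2**30:.0f} GiB of weights")  # ~12 GiB, fits in 16 GB
```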

2 points

No, they’re still getting better; it’s mostly that the gains now come as part of a bigger context of other discoveries.

19 points

AI like:

3 points

That shit will pave the way for a new age of horror movies, I swear.
