“Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease,” they added. “We term this condition Model Autophagy Disorder (MAD).”

Interestingly, this could become a bigger problem as generative AI output makes up a growing share of content online.
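The degradation the researchers describe can be illustrated with a toy simulation (my own sketch, not the paper's actual experiment): fit a simple Gaussian "model" to data, sample synthetic data from it with a bias toward the most probable outputs, retrain on those samples, and repeat with no fresh real data mixed in. Diversity, measured here as standard deviation, collapses across generations.

```python
import random
import statistics

def train_generation(samples):
    # "Train" a toy model: fit a Gaussian to the data.
    return statistics.mean(samples), statistics.stdev(samples)

def sample_model(mu, sigma, n, rng):
    # Generate synthetic data, keeping only the draws closest to the
    # mean (a stand-in for models favoring high-probability outputs).
    draws = sorted((rng.gauss(mu, sigma) for _ in range(2 * n)),
                   key=lambda x: abs(x - mu))
    return draws[:n]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # fresh real data
mu, sigma = train_generation(data)
for gen in range(10):
    data = sample_model(mu, sigma, 1000, rng)      # no fresh data added
    mu, sigma = train_generation(data)
    print(f"generation {gen}: stdev = {sigma:.4f}")
# stdev shrinks toward zero across generations
```

The truncation step is of course an exaggeration, but the direction of the effect matches the paper's claim: without fresh real data, each generation sees less of the original distribution's tails.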

45 points

Note that humans do not exhibit this property when trained on other humans, so this would seem to prove that “AI” isn’t actually intelligent.

49 points

Almost as if current models are fancy token predictors with no reasoning about the input

12 points

Do we even need to prove this? Anyone who has studied even a little of how generative AI works knows it's not intelligent.

6 points

There are enough arguments about that, even among highly intelligent people.

6 points

Weren't the echo chambers during the COVID pandemic kind of proof that humans DO exhibit the same property? A good number of people will start repeating stuff about nanoparticles, or claiming that the black lint in a mask is worms that will control your brain.

0 points

That only happened to some humans. Something must be seriously wrong with them.

3 points

Are we sure they were humans? Maybe they were ChatGPT 2.

5 points

I don't think LLMs are intelligent, but "does it work the same as humans" is a really bad way to judge something's intelligence.

14 points

Even if we look at other animals: when they learn by observing other members of their own species, they get more competent rather than less. So AIs are literally the only thing that gets worse when trained on their own kind, rather than better. It's hard to argue they're intelligent if the answer to "does it work the same as any other lifeform we know of?" is "no".

2 points

Are there any animals that only learn by observing the outputs of members of their own species? Or is it a mixture of that and a whole bunch of their own experiences with the outside world?

5 points

Current AI is not actually "intelligent" and, as far as I know, not even their creators directly describe them as such. The programs and models existing at the moment aren't capable of abstract thinking or reasoning, or the other processes that make an intelligent being or thing intelligent.

The companies involved are certainly eager to create something like a general intelligence. But even when they reach that goal, we don’t know yet if such an AGI would even be truly intelligent.

2 points

Humans are not entirely trained on other humans, though. We learn plenty of stuff from our environment and experiences. Note this very important part of the primary conclusion:

without enough fresh real data in each generation

1 point

Math, for example, is something one could argue is taught purely by humans.

3 points

Dogs can do math and I’m quite sure I’ve never taught my dog that deliberately.

Even for humans learning it, I would expect that most of our understanding of math comes from everyday usage of it rather than explicit rote training.

0 points

Key point here being that humans train on other humans, not on themselves. They are also always exposed to the real world.

If you lock a human in a box and only let them interact with themselves they go a bit funny in the head very quickly.

5 points

The reason is different from what is happening with AI, though. Sensory deprivation or extreme isolation and the Ganzfeld effect lead to hallucinations because our brain seems to have to constantly react to stimuli in order to keep functioning. Our brain starts creating things from imagination.

With AI it is the other way around. They lose information when trained on their own output again and again, because their statistical models keep reinforcing the most probable outputs and discard the rarer ones.
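That kind of information loss can be sketched with a toy bigram model (my own illustration, far simpler than the models the article discusses): train word-pair statistics on a sentence, generate greedily from the most probable continuations, retrain on the generated text, and the rarer words vanish after a single loop.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Count word-pair frequencies: a toy "statistical model".
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, length):
    # Always pick the most probable next word (probability-seeking).
    out = [start]
    for _ in range(length):
        ranked = model[out[-1]].most_common(1)
        if not ranked:
            break
        out.append(ranked[0][0])
    return out

corpus = "the cat sat on the mat while the dog sat on the rug".split()
for gen in range(3):
    model = train_bigrams(corpus)
    corpus = generate(model, "the", 12)  # retrain on own output
print(" ".join(corpus))
# the dog, the mat, and the rug never appear again
```

After one generation the model only ever emits "the cat sat on" in a loop: every word that wasn't the single most probable continuation is gone from the training data and can never come back.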

23 points

If you let the AI feed on its own bullshit long enough it will eventually vote for Donald Trump

17 points

AI incest at work. Just look at that Habsburg jaw.

2 points

Love it. AI incest is the perfect term for it haha.

11 points

Good!

Was that petty?

But, you know, good luck completely replacing human artists, musicians, writers, programmers, and everyone else who actually creates new content, if all generative AI models essentially give themselves prion diseases when they feed on each other.

4 points

I absolutely agree! I've seen so many proponents of AI argue that AI learning from artworks scraped from the internet is no different from a human learning by looking at other artists, and while anyone who is actually an artist (or involved in any creative industry at all, including things like coding that require a creative mind) can see the difference, I've always struggled to coherently express why. And I think this is it. Human artists benefit from other human art to look at, as it helps them improve faster, but they don't need it in the same way, and they're more than capable of coming up with new ideas without it. Even a brief look at art history shows plenty of examples of human artists coming up with completely new ideas, artworks that had absolutely no precedent. I really can't imagine AI ever being able to invent, say, Cubism without having seen a human do it first.

I feel like the only people in favour of AI artworks are those who don't see the value of art outside of its commercial use. They're the same people who are, presumably, quite happy playing samey games and watching samey TV and films over and over again. AI just can't replicate the human spark of creativity, and I really can't see it being good for society, either economically or culturally, to replace artists with algorithms that can only produce derivations of what they've already seen.

10 points

I only have a small amount of experience with generating images using AI models, but I have found this to be true. It’s like making a photocopy of a photocopy. The results can be unintentionally hilarious though.

