Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.


This is a gross misrepresentation of the study.

That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.

That's not their argument. They're saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.

Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.

They're not talking about achieving it in general; they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.

That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it, doesn't mean it has any relationship to the real world.

That's not what they did. They posited an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. they have a computer with limitless memory, they have infinite and perfect training data, they can sample without any bias, current techniques can eventually create AGI, an AGI would only have to be slightly better than random chance but not perfect, etc…), and then present a computational proof that this scenario contradicts established mathematical results.
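Spelled out a bit, the idealized setup looks something like this (my paraphrase of the scenario above, not the paper's own notation):

```latex
% My paraphrase of the idealized learning setup described above;
% the symbols are mine, not the paper's exact notation.
\textbf{Given:} a distribution $D$ over situation--behavior pairs $(s, b)$,
unbiased sample access to $D$, and unbounded memory.

\textbf{Task:} produce, by any currently known learning method, an agent $A$
such that
\[
  \Pr_{(s, b) \sim D}\bigl[\, A(s) = b \,\bigr] \;\geq\; \tfrac{1}{2} + \varepsilon
\]
for some fixed $\varepsilon > 0$, i.e.\ the agent only has to beat chance by
a margin, not be perfect.
```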

Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There's a technical explanation in the paper that I'm not going to try to rehash, since it's been too long since I worked on computational proofs, but it seems to check out. But this is a contradiction: we have proof, hard mathematical proof, that such an algorithm cannot exist, because the problem has no polynomial-time solution and is NP-hard. Therefore, AI-learning for an AGI must also be NP-hard. And because every known AI learning method is tractable, it cannot possibly lead to AGI. It's not a strawman, it's a hard proof of why it's impossible, like proving that pi has infinite decimals or something.
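The shape of that reductio, as I understand it (again a paraphrase, not the paper's literal proof):

```latex
% Sketch of the contradiction; paraphrased, not the paper's
% literal proof.
\textbf{Assume} a tractable learner $L$ exists: $L$ runs in polynomial time
and outputs an agent that beats chance on $D$.

\textbf{Then} $L$ doubles as a polynomial-time decider for Perfect-vs-Chance
(telling a system that behaves perfectly apart from one that does no better
than chance).

\textbf{But} Perfect-vs-Chance provably has no polynomial-time solution.

\textbf{Hence} no tractable learner $L$ exists: learning an AGI this way is
itself NP-hard.
```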

Ergo, anyone who claims that AGI is around the corner either means "a good AI that can demonstrate some but not all human behaviour" or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we'd still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing perhaps, to even hope to begin work on an AGI. And again, the authors don't offer a thought experiment, they provide a computational proof for this.


There are a number of major flaws with it:

  1. Assume the paper is completely true. It's just proved the algorithmic complexity of it, but so what? What if the general case is NP-hard, but not the cases we care about? That's been true for other problems (SAT is NP-hard in general, yet SAT solvers routinely handle huge real-world instances), so why not this one?
  2. It proves something in a model. So what? Prove that the result applies to the real world.
  3. Replace "human-like" with something trivial like "tree-like". The paper then proves that we'll never achieve tree-like intelligence?

IMO there are also flaws in the argument itself, but those are more relevant.


Hey! Just asking you because I'm not sure where else to direct this energy at the moment.

I spent a while trying to understand the argument this paper was making, and for the most part I think I've got it. But there's a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:

> If producing an AGI is intractable, why does the human meat-brain exist?

Evolution "may be thought of" as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the "AI" it produces matches this distribution of successful behaviors is yada yada Darwinism. The answer we care about, because this is the inspiration I imagine AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
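A toy sketch of the framing I mean, where every specific (bit-string genomes, the target behavior, the rates) is invented for illustration and not taken from the paper:

```python
import random

# Toy sketch of "evolution as sampling": candidate behaviors are
# sampled, scored against an (abstract) target of successful
# situation-behaviors, and the fittest seed the next round.
# All specifics here are invented for illustration.

TARGET = [1] * 32  # stand-in for "the successful behavior"

def fitness(genome):
    # Darwinian scoring: how closely the behavior matches the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"matched the target at generation {generation}")
        break
    survivors = population[:20]   # selection
    population = survivors + [    # keep survivors, breed the rest
        mutate(random.choice(survivors)) for _ in range(80)
    ]
```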

The question is, where does this line of thinking fail?

Going by the proof, it should be one of:

  • That evolution is an intractable method. 60 million years is a long time, but it still feels quite short for this answer.
  • Something about it doesn't fit within this computational paradigm. That is, I'm stretching the definition.
  • The language "no better than chance" for option 2 is actually more significant than I'm thinking. Evolution is all chance. But is our existence really just extreme luck? I know that it is, but this answer is really unsatisfying.

I'm not sure how to formalize any of this, though.

The thought that we could "encode all of biological evolution into a program of at most size K" did make me laugh.


That's a great line of thought. Take an algorithm of "simulate a human brain". Obviously that would break the paper's argument, so you'd have to find why it doesn't apply here to take the paper's claims at face value.


> If producing an AGI is intractable, why does the human meat-brain exist?

Ah, but here we have to get pedantic a little bit: producing an AGI through currently known methods is intractable.

The human brain is extremely complex and we still don't fully know how it works. We don't know if the way we learn is really analogous to how these AIs learn. We don't really know if the way we think is analogous to how computers "think".

There's also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don't fit the definition either. If that's true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we're overestimating how special we are.

And then there's the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn't "around the corner" as some enthusiasts claim. For any practical AGI we'd have to finish training in maybe a couple of years, not millions of years.
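As a back-of-envelope illustration of that scale gap (every number below is a rough order-of-magnitude guess, not a figure from the paper):

```python
# Rough scale comparison of evolution's "training budget" with a
# practical one. All numbers are order-of-magnitude guesses for
# illustration, not figures from the paper.

YEARS_OF_EVOLUTION = 60e6        # the "60 million years" above
ORGANISMS_PER_GENERATION = 1e12  # "trillions of creatures"
GENERATIONS_PER_YEAR = 1         # crude average across species

evolutionary_samples = (
    YEARS_OF_EVOLUTION * GENERATIONS_PER_YEAR * ORGANISMS_PER_GENERATION
)

PRACTICAL_YEARS = 2              # "a couple of years" of training
SAMPLES_PER_YEAR = 1e15          # generous guess for one training run

practical_samples = PRACTICAL_YEARS * SAMPLES_PER_YEAR

print(f"evolution: ~{evolutionary_samples:.0e} candidate agents tried")
print(f"practical: ~{practical_samples:.0e} samples available")
print(f"gap:       ~{evolutionary_samples / practical_samples:.0e}x")
```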

And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?


> Ah, but here we have to get pedantic a little bit: producing an AGI through currently known methods is intractable.

I didn't quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc., whatever. But yeah, this divide between solvable and "unsolvable" shifts if we ever break NP-hard and have to define some new NP-super-hard category. This does feel like the piece I was missing. Or a piece, anyway.

> e.g. humans don't fit the definition either.

I did think about this, and the only reason I reject it is that "human-like or -level" matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn't have to mean that we aren't still below some curve, of course, but I do struggle to imagine how our own complexity wouldn't still be too large to solve, AGI or not.


Anyway, the main reason I'm replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.

