Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes show new proof that those claims are overblown and unlikely to ever come to fruition. Their findings are published in Computational Brain & Behavior today.
This is a gross misrepresentation of the study.
That's as shortsighted as the "I think there is a world market for maybe five computers" quote, or the worry that NYC would be buried under mountains of horse poop before cars were invented.
That's not their argument. They're saying that they can prove that machine learning cannot lead to AGI in the foreseeable future.
Maybe transformers aren't the path to AGI, but there's no reason to think we can't achieve it in general unless you're religious.
They're not talking about achieving it in general, they only claim that no known techniques can bring it about in the near future, as the AI-hype people claim. Again, they prove this.
That's a silly argument. It sets up a strawman and knocks it down. Just because you create a model and prove something in it doesn't mean it has any relationship to the real world.
That's not what they did. They set up an extremely optimistic scenario in which someone creates an AGI through known methods (e.g. a computer with limitless memory, infinite and perfect training data, sampling without any bias, the assumption that current techniques can eventually create AGI, an AGI that only has to be slightly better than random chance rather than perfect, etc.), and then presented a computational proof that this scenario contradicts established complexity-theoretic results.
Basically, if you can train an AGI through currently known methods, then you have an algorithm that can solve the Perfect-vs-Chance problem in polynomial time. There's a technical explanation in the paper that I'm not going to try to rehash, since it's been too long since I worked on computational proofs, but it seems to check out. And that's a contradiction: we have hard mathematical proof that Perfect-vs-Chance is NP-hard, so (assuming P ≠ NP) no polynomial-time algorithm for it can exist. Therefore, learning an AGI must also be NP-hard. And because every known AI learning method is tractable (polynomial-time), it cannot possibly lead to AGI. It's not a strawman, it's a hard proof of why it's impossible, like proving that pi is irrational.
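If it helps, here's the skeleton of that argument as I understood it, in my own notation ("Learn-AGI" is just my stand-in name for the training problem the paper defines; read the paper for the real definitions):

```latex
% Rough shape of the contradiction -- my notation, not the paper's.
\begin{align*}
&\text{Assume: } \textsc{Learn-AGI} \in \mathrm{P}
  && \text{(some known, tractable method trains an AGI)}\\
&\textsc{Perfect-vs-Chance} \le_p \textsc{Learn-AGI}
  && \text{(the paper's reduction)}\\
&\Rightarrow\ \textsc{Perfect-vs-Chance} \in \mathrm{P}
  && \text{(poly-time reductions preserve membership in P)}\\
&\textsc{Perfect-vs-Chance} \text{ is NP-hard}
  && \text{(as established in the paper)}\\
&\Rightarrow\ \mathrm{P} = \mathrm{NP}
  && \text{(contradiction, assuming } \mathrm{P} \neq \mathrm{NP}\text{)}
\end{align*}
```

So the conclusion isn't "AGI is logically impossible", it's "tractably learning one is impossible unless P = NP", which is about as close to impossible as complexity theory gets.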
Ergo, anyone who claims that AGI is around the corner either means "a good AI that can demonstrate some but not all human behaviour" or is bullshitting. We could literally burn up the entire planet for fuel to train an AI and we'd still not end up with an AGI. We need some other breakthrough, e.g. significant advancements in quantum computing, to even hope to begin work on an AGI. And again, the authors don't offer a thought experiment; they provide a computational proof for this.
Hey! Just asking you because I'm not sure where else to direct this energy at the moment.
I spent a while trying to understand the argument this paper was making, and for the most part I think I've got it. But there's a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:
If producing an AGI is intractable, why does the human meat-brain exist?
Evolution "may be thought of" as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the "AI" it produces matches this distribution of successful behaviors is yada yada darwinism. The answer we care about, because this is the inspiration I imagine AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
The question is, where does this line of thinking fail?
Going by the proof, it should either be:
- That evolution is an intractable method. 60 million years is a long time, but it still feels quite short for this answer.
- Something about it doesn't fit within this computational paradigm. That is, I'm stretching the definition.
- The language "no better than chance" for option 2 is actually more significant than I'm thinking. Evolution is all chance. But is our existence really just extreme luck? I know that it is, but this answer is really unsatisfying.
I'm not sure how to formalize any of this, though.
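For what it's worth, here's my toy attempt anyway: evolution as a sampler over behavior tables, with a hidden target standing in for darwinian selection. Every name and number here is made up, and the setup is deliberately rigged to be easy:

```python
import random

# Toy formalization of "evolution as a sampler" -- my framing, not the
# paper's. A population of candidate behavior tables is sampled, scored
# against a hidden "situation -> successful behavior" target (standing in
# for selection pressure), and resampled with mutation.

N_SITUATIONS = 32      # size of the (tiny) situation space
POP_SIZE = 100         # candidates alive per generation
MUTATION_RATE = 0.02   # per-situation chance of flipping a behavior
random.seed(0)

# The abstract distribution of "successful" behaviors, hidden from the sampler.
target = [random.randint(0, 1) for _ in range(N_SITUATIONS)]

def fitness(candidate):
    """Selection pressure: fraction of situations handled successfully."""
    return sum(c == t for c, t in zip(candidate, target)) / N_SITUATIONS

def mutate(candidate):
    """Copy a parent with rare random behavior flips."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in candidate]

population = [[random.randint(0, 1) for _ in range(N_SITUATIONS)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = fitness(population[0])
    if best == 1.0:
        print(f"matched the target at generation {generation}")
        break
    # Resample: the fitter half reproduces (with mutation) into the next pool.
    parents = population[:POP_SIZE // 2]
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
else:
    print(f"best agreement after 200 generations: {best:.2%}")
```

Of course, this toy is easy on purpose: fitness is fully visible and decomposes per-situation, so selection climbs straight to the target. My reading of the paper is that the real problem lacks exactly that structure, which is where the NP-hardness bites.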
The thought that we could "encode all of biological evolution into a program of at most size K" did make me laugh.
If producing an AGI is intractable, why does the human meat-brain exist?
Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.
The human brain is extremely complex and we still don't fully know how it works. We don't know if the way we learn is really analogous to how these AIs learn. We don't really know if the way we think is analogous to how computers "think".
There's also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don't fit the definition either. If that's true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we're overestimating how special we are.
And then there's the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn't "around the corner" as some enthusiasts claim. For any practical AGI we'd have to finish training in maybe a couple of years, not millions of years.
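To put made-up numbers on "not millions of years": if training cost grows like 2^n in some measure n of behavioral complexity, hardware barely helps. A back-of-the-envelope, all figures purely illustrative:

```python
# Back-of-the-envelope only; every number here is made up for illustration.
# If training cost scales like 2**n for some complexity measure n, even an
# exascale machine falls off a cliff within a few dozen doublings.
OPS_PER_SECOND = 1e18        # roughly an exascale supercomputer
SECONDS_PER_YEAR = 3.15e7

for n in (60, 80, 100, 120):
    years = 2.0 ** n / OPS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n = {n:3d}: ~{years:.1e} years of compute")
```

That runs from about a second at n = 60 to several times the age of the universe at n = 120, which is the sense in which "just add compute" stops being an answer.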
And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?
Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.
I didn't quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc., whatever. But yeah, this divide between tractable and "intractable" shifts if we ever find a way around NP-hardness, say P turning out to equal NP, or some new model of computation redrawing the boundary. This does feel like the piece I was missing. Or a piece, anyway.
e.g. humans don't fit the definition either.
I did think about this, and the only reason I reject it is that "human-like or -level" matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn't have to mean that we aren't still below some curve, of course, but I do struggle to imagine how our own complexity wouldn't still be too large to solve, AGI or not.
Anyway, the main reason I'm replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.
There's a number of major flaws with it:
- Assume the paper is completely true. It's just proved the algorithmic complexity of it, but so what? What if the general case is NP-hard, but not the cases we care about? That's been true for other problems (see the toy SAT sketch at the end of this comment), why not this one?
- It proves something in a model. So what? Prove that the result applies to the real world.
- Replace "human-like" with something trivial like "tree-like". The paper then proves that we'll never achieve tree-like intelligence?
IMO there are also flaws in the argument itself, but those are more relevant.
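To make the first bullet concrete: SAT is the textbook NP-hard problem, yet even a naive DPLL search knocks over plenty of structured instances instantly, so worst-case hardness doesn't automatically tell you about the instances you actually face. A toy sketch (the formula and code are mine, purely illustrative):

```python
# SAT is NP-hard in general, yet many concrete instances are easy.
# Clauses are lists of non-zero ints: positive = variable, negative = its
# negation. An assignment is a frozenset of literals taken to be true.

def dpll(clauses, assignment):
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue  # clause already satisfied: drop it
        remaining = [lit for lit in clause if -lit not in assignment]
        if not remaining:
            return None  # clause falsified: dead end
        simplified.append(remaining)
    if not simplified:
        return assignment  # every clause satisfied
    var = abs(simplified[0][0])  # branch on the first unassigned variable
    for choice in (var, -var):
        result = dpll(simplified, assignment | {choice})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x2 or x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [2, 3]]
print(dpll(clauses, frozenset()))  # a satisfying assignment, found instantly
```

Whether "training an AGI" has that kind of benign structure in practice is exactly the open question.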