Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present new proof that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

If producing an AGI is intractable, why does the human meat-brain exist?

Ah, but here we have to get a little bit pedantic: producing an AGI through currently known methods is intractable.

The human brain is extremely complex and we still don’t fully know how it works. We don’t know if the way we learn is really analogous to how these AIs learn. We don’t really know if the way we think is analogous to how computers “think”.

There’s also another argument to be made: that an AGI matching the currently agreed-upon definition is impossible. And I mean that in the broadest sense, e.g. humans don’t fit the definition either. If that’s true, then an AI could perhaps be trained in a tractable amount of time, but this would upend our understanding of human consciousness (perhaps justifiably so). Maybe we’re overestimating how special we are.

And then there’s the argument that you already mentioned: it is intractable, but 60 million years, spread over trillions of creatures, is long enough. That also suggests that AGI is really hard, and that creating one really isn’t “around the corner” as some enthusiasts claim. For any practical AGI we’d have to finish training in maybe a couple of years, not millions of years.
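Just to put that scale gap in numbers, here’s a toy back-of-the-envelope sketch in Python. Every figure in it is an assumption lifted from this paragraph or invented for illustration, not a measurement:

```python
# Toy comparison: evolution's "search budget" vs. one practical training run.
# All numbers are illustrative assumptions, not measurements.

YEARS_OF_EVOLUTION = 60e6   # the 60 million years mentioned above
CREATURES = 1e12            # "trillions of creatures" (assumed)
TRAINING_YEARS = 2          # "a couple of years" for a practical AGI

# Count each creature-year as one unit of blind search effort.
evolutionary_budget = YEARS_OF_EVOLUTION * CREATURES  # ~6e19 creature-years
training_budget = TRAINING_YEARS * 1                  # one run, two years

print(f"evolution: ~{evolutionary_budget:.0e} creature-years")
print(f"training:  ~{training_budget:.0e} creature-years")
print(f"gap:       ~{evolutionary_budget / training_budget:.0e}x")
```

Even if hardware parallelism claws back a big chunk of that factor, it gives a feel for why “evolution managed it” doesn’t imply “we’re almost there.”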

And maybe we develop some quantum computing breakthrough that gets us where we need to be. Who knows?

Ah, but here we have to get a little bit pedantic: producing an AGI through currently known methods is intractable.

I didn’t quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc., whatever. But yeah, this divide between solvable and “unsolvable” shifts if we ever crack NP-hard problems and have to define some new NP-super-hard category. This does feel like the piece I was missing. Or a piece, anyway.
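For a concrete feel for what “intractable” means in that divide, here’s a minimal Python sketch of brute-force subset sum, a textbook NP-hard problem. Nothing in it comes from the paper; it just shows the exponential blowup the word is pointing at:

```python
# Brute-force subset sum: check all 2^n subsets, so each extra element
# doubles the work -- the exponential blowup behind "intractable".

from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Return True if some subset of nums sums to target. O(2^n) time."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return True
    return False

for n in (10, 20, 30):
    print(f"n = {n:2d} -> up to {2**n:,} subsets to check")
# n = 30 is already ~1e9 subsets; n = 100 would be ~1.3e30. A polynomial
# shortcut for problems like this (i.e. P = NP) is exactly the kind of
# break that would move the solvable/"unsolvable" divide.
```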

e.g. humans don’t fit the definition either.

I did think about this, and the only reason I reject it is that “human-like or -level” matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn’t have to mean that we aren’t still below some curve, of course, but I do struggle to imagine how our own complexity wouldn’t still be too large to solve, AGI or not.

Anyway, the main reason I’m replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.
