Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
[…] in blog posts and videos and published memoirs, autistic teens and young adults described living for a decade or more without any way to communicate, while people around them assumed they were intellectually deficient.
On a related note… only 5% of hearing parents with a deaf child will learn sign language.
That’s awful. I don’t know why sign language isn’t made into an official state language that everyone has to learn to some basic level of proficiency.
Amen! And it would benefit literally everybody. You can communicate across a room or in loud environments. It’s so useful!
Also, those parents could be close relatives, so they could fuck up their offspring’s genome by passing recessive genes to them from both sides, causing a recessive phenotype to be expressed, which most of the time is a disability. Shouldn’t expect much from such parents.
“Something trained only on form” — as all LLMs are, by definition — “is only going to get form; it’s not going to get meaning. It’s not going to get to understanding.”
I had lengthy and intricate conversations with ChatGPT about philosophy and religious concepts. It allowed me to playfully peek into Spinoza’s worldview, with a few errors.
I have no problem accepting that it is form, but I cannot deny that it conveys meaning as if it understands.
The article is very opinionated and dismissive in that regard. It even goes so far as to predict what future research and engineering cannot achieve; that makes it untrustworthy.
We cannot even pin down what we mean by intelligence and meaning. And despite being way too long, the article doesn’t mention emergent capabilities or quote any of the many contrary scientific views.
Apart from the unnecessarily long anecdotes about autistic and disabled people, did anybody learn anything from this article? I feel it’s an uncritical parroting of what people like to think anyway so they can feel superior and secure.
LLMs are definitely not intelligent. If you understand how they work, you’ll realise why that is. LLMs reflect the intelligence in the work they are trained on. No more, no less.
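To make that concrete, here’s a toy sketch of the same “predict the next token from your training text” idea, just shrunk down to a bigram counter (real LLMs are transformers trained on enormous corpora, so this is only an illustration of the training objective, not of their actual architecture). Notice that it can only ever remix statistics of what it was fed.

```python
from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in the training text,
# then generate by sampling the next word in proportion to those counts.
training_text = "the cat sat on the mat and the dog sat on the rug"

counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:          # word never seen with a successor: nothing to predict
            break
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug" -- only recombinations of the training data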
That very much depends on what you define as “intelligent”. We lack a clear definition.
I agree: These early generations of specific AIs are clearly not on the same level as human intelligence.
And still, we can already have more intelligent conversations with them than with most humans.
It’s not a fair comparison though. It’s as if we compared the language region of a toddler with the complete brain of an adult. Let’s see what the next few years bring.
I’m not making that point, just mentioning that it can be made on an academic level: there’s a paper about the surprising emergent capabilities of GPT-4, titled “Sparks of AGI”.
That might seem plausible until you read deeply into the latest cognitive science. These days a growing consensus is forming around the “predictive coding” theory of cognition, and the idea is that human cognition also works by minimizing prediction error. We have models in our brains that reflect the input we’ve been trained on. I think anyone who understands both human cognition and LLMs cannot yet confidently say that LLMs are or are not intelligent.
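For anyone wondering what “minimizing prediction error” looks like concretely, here’s a minimal sketch of the predictive-coding idea (my own toy simplification, not taken from the article or any particular paper): the agent holds a belief about a hidden cause, predicts its sensory input from that belief, and keeps nudging the belief until the prediction error shrinks.

```python
# Toy predictive-coding loop: inference as prediction-error minimization.
# The generative model g() says how a hidden cause produces sensory input.

def g(mu):
    return 2.0 * mu  # assumed linear generative model: input = 2 * hidden cause

def infer(observation, mu=0.0, lr=0.05, steps=200):
    for _ in range(steps):
        error = observation - g(mu)   # prediction error between input and prediction
        mu += lr * 2.0 * error        # gradient step on squared error (d g/d mu = 2)
    return mu                         # belief settles where the prediction matches the input

print(infer(8.0))  # converges to ~4.0, since g(4.0) = 8.0 matches the observation
```

The point of the analogy in the comment above is that this loop, like an LLM’s training objective, is “just” prediction, yet it’s also a serious candidate description of what brains do.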
The ability to speak does not make you intelligent.
Here is an alternative Piped link(s): https://piped.video/watch?v=TUq6rGdfJSo
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
The ability to speak seems like an obvious sign of some kind of intelligence and complexity, but I don’t remember anyone ever arguing that the inability to speak means a lack of intelligence. We know plenty of intelligent species that lack the ability to use language as complex as ours, but we don’t consider them unintelligent because of that.
“LLMs are not intelligent at all”
Sucks to lose your job, then, to a potentially more competent AI that lacks any intelligence.
Did you say “sucks to lose your job to a manufacturing robot that lacks any intelligence” to the countless people in manufacturing jobs left destitute by robotics starting in the 1960s?
I did. Change is hard, but it’s hard to argue that we would be better off without those robots. They do a lot of work for us and we can all have better stuff because of it. In a better world we’d be excited because it means we all have to do less work, but the upper class just keeps finding more stuff for us to do.