Literally just mainlining marketing material straight into whatever’s left of their rotting brains.
You’re right that it isn’t, though considering science has huge problems even defining sentience, it’s a pretty moot point right now. At least until it starts to dream about electric sheep or something.
So you can’t name a specific task that bots can’t do?
reproduce without consensual assistance
move
You asked how chatgpt is not AI.
ChatGPT is not AI because it is not sentient. It is not sentient because it is a search engine; it was not made to be sentient.
Of course machines could theoretically, in the far future, become sentient. But LLMs will never become sentient.
the thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems - a language module, visual processing, a planning and decision-making hub, etc… we know the brain works like this - idk where it all got lost. profit, probably.
name a specific task that bots can’t do
Self-actualize.
In a strict sense, yes, humans do Things based on if > then stimuli. But we assign ourselves these Things to do, and chatbots/LLMs can’t. They will always need a prompt, even if they became advanced enough to keep iterating on that prompt on their own.
I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot designed to spit out a random drawing every hour without being asked, that schedule is still an outside prompt: without someone programming the AI to do this, it wouldn’t do it.
Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI would still have started as a program.
Oh, that’s easy. There are plenty of complex integrals, and even statistics problems, that computers still can’t do properly because the required transformations are unintuitive or run counter to the steps used for simpler integrals and problems.
You will literally run into them if you take a basic Calculus 2 or Stats 2 class. You’ll see it on Chegg all the time: someone trying to rack up answers for a resume using ChatGPT will fuck up the answers. For many of these integrals, the answers are instead hard-coded into a calculator like Symbolab, so the only reason the computer can ‘do it’ is that someone already did it first. It still can’t reason from first principles or extrapolate to complex theoretical scenarios.
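(For what it’s worth, a standard Calc 2 example of the kind of step I mean, not a problem anyone above actually cited: the integral of sec³x. Integration by parts brings the original integral back, and you have to solve for it algebraically instead of just chaining substitutions, which is exactly the sort of move that trips these tools up.)

% integration by parts with u = sec x, dv = sec^2 x dx; the original integral reappears
\int \sec^3 x\,dx = \sec x\tan x - \int \sec x\tan^2 x\,dx
                  = \sec x\tan x + \int \sec x\,dx - \int \sec^3 x\,dx
% collect both copies of the integral and solve for it
2\int \sec^3 x\,dx = \sec x\tan x + \ln\lvert\sec x + \tan x\rvert + C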
That said, the ability to complete tasks is not indicative of sentience.