Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.
It is still being used (by most vendors) consistently with how it has been used for some 40 years. ML, video-game opponents, chess engines: all of these have been referred to as “AI” for at least that long. Anyone who thinks that calling GPT or Stable Diffusion “AI” started “five minutes ago” […]
I agree it is not novel, but to me the more important point is: it’s not wrong, either. “Artificial intelligence” is a fitting, descriptive term for a decision-making tool, regardless of use case or sophistication (within limits).
Just as we can talk about the (degree of) intelligence of a mouse without equating the mouse’s intelligence to our own.
Sometimes we emphasize the difference by talking about “narrow AI”, as opposed to general AI, or AGI (artificial *general* intelligence). People somewhat educated on the topic understand that human-level general intelligence is currently achieved only by humans, with no clear way to reproduce it artificially (and serious doubts, even among academics, about whether that will ever happen). Thus, “AI” without further specification is generally understood to mean narrow AI, not AGI.
To step into the room now and say, “tech companies managed to dumb down and rebrand ‘AI’ to mean ‘anything utilizing a machine-learning algorithm’” just shows how little the author understands the topic.