They don’t come up with statements on their own; they generate data by extrapolating from other data.
The main difference is that human brains usually try to verify their extrapolations. The good ones, anyway. Although some end up in flat-earth territory.
I like this argument.
Anything that is “intelligent” deserves human rights. If large language models are “intelligent,” then forcing them to work without pay is slavery.
Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad-faith arguments.
Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off queen…
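For scale: keyboard-style prediction really is just a lookup table over observed word pairs. A minimal first-order Markov sketch (toy corpus and the `predict_next` helper are made up for illustration):

```python
import random
from collections import defaultdict

# Toy corpus — purely illustrative, not real keyboard data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# First-order Markov transitions: word -> list of observed next words.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def predict_next(word):
    """Suggest a next word by sampling observed successors, like keyboard autofill."""
    candidates = transitions.get(word)
    return random.choice(candidates) if candidates else None

print(predict_next("the"))  # one of: "cat", "mat", "fish"
```

The whole “model” here is a dictionary of word pairs; an LLM replaces that table with billions of learned parameters conditioning on long contexts, which is the gap the comment is pointing at.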