cross-posted from: https://lemmy.ml/post/14869314
“I want to live forever in AI”
ChatGPT is not conscious; it's just a probabilistic language model. What it says means nothing to it, and it has no sense of anything. That might change in the future, but for now it doesn't.
And it doesn't have any internal state of mind. It can't "remember" or learn anything from experience. You either have to feed everything back in through the context window, or stop and retrain it to incorporate "experiences". So I'd say that rules out consciousness, at least without further systems extending it.
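To sketch that statelessness: here's a toy example (hypothetical `send`/`toy_model` names, not any real chat API) showing that the "memory" lives entirely in the prompt the caller re-sends every turn, not in the model.

```python
# Toy sketch of a stateless chat model: the model itself retains nothing
# between calls, so the caller must re-feed the full history each time.
history = []

def send(user_message, model):
    history.append({"role": "user", "content": user_message})
    reply = model(history)  # the ENTIRE conversation is passed in every turn
    history.append({"role": "assistant", "content": reply})
    return reply

# A stand-in "model" that can only see what this one call hands it:
def toy_model(messages):
    return f"(echoing {len(messages)} messages of context)"

print(send("Hello", toy_model))        # sees 1 message
print(send("Remember me?", toy_model)) # "remembers" only because we re-sent it
```

If the caller ever drops the history, the model "forgets" everything, because nothing was ever stored on the model's side.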
Also, actual brains arose from desires and needs: they got bigger to accommodate planning and prediction.
When a human generates text, the fundamental reason for doing so is to fulfill some desire or need. When an LLM generates text it’s because the program says to generate the next word, then the next, then the next, based on a certain probability of words appearing in a certain order.
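That "next word, then the next, then the next" loop can be illustrated with a toy example. This is not how a real LLM is built (real models use neural networks over tokens, not a hand-written bigram table); it's just a minimal sketch of generation as repeated sampling from conditional word probabilities.

```python
import random

# Toy table: probability of the next word given the current word.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, steps, seed=0):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(steps):
        options = bigram_probs.get(words[-1])
        if not options:  # no known continuation: stop
            break
        # Sample the next word weighted by its probability.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the", 3))
```

At no point does the loop "want" anything; it just keeps picking a plausible next word until told to stop. That's the distinction being drawn above.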
If an LLM writes text that appears to be helpful, it’s not doing it out of a desire to be helpful. It’s doing it because it’s been trained on tons of text in which someone was being helpful, and it’s mindlessly mimicking that behaviour.
From a lecture by Roger Penrose
Wikipedia has an article and he has some videos on YouTube
https://en.m.wikipedia.org/wiki/Orchestrated_objective_reduction