1 point

What we know for certain is that Bing, ChatGPT, and other language models are not sentient

I wonder how we can “certainly” know that.

1 point

Because we built the model and know how it works. If you poke a dead brain a limb might twitch, but there is no consciousness in there. That’s pretty much where we’re at.

1 point

I would argue that we also know how brains work on a physical/chemical level, but that does not mean we understand how they work on a system level. Likewise, we know how NNs work on a mathematical level, but not on a system level.
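To make the "mathematical level" point concrete: the core operations of a neural network are just repeated matrix products and a nonlinearity. A minimal toy sketch (this is an illustrative two-layer net with made-up weights, not any real model) shows how simple the per-step math is, even though that simplicity says nothing about what billions of such operations do at a system level:

```python
import math

def forward(x, weights, biases):
    """One pass through a tiny fully connected net: h = tanh(W.h + b) per layer."""
    h = x
    for W, b in zip(weights, biases):
        h = [math.tanh(sum(w_ij * h_j for w_ij, h_j in zip(row, h)) + b_i)
             for row, b_i in zip(W, b)]
    return h

# Toy network: 2 inputs -> 2 hidden units -> 1 output (arbitrary weights)
weights = [[[0.5, -0.3], [0.8, 0.2]], [[1.0, -1.0]]]
biases = [[0.1, 0.0], [0.0]]
print(forward([1.0, 2.0], weights, biases))
```

The whole "mathematical level" fits in a dozen lines; the open question in the comment is about what emerges when this is scaled up, which the equations alone don't answer.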

When someone claims that some object does not have a certain property, I would expect them to define what the necessary conditions for this property are, and then show that these conditions are not satisfied by the object.

As far as I know, the current consensus hypothesis is that sentience/consciousness emerges from certain patterns of information processing. So one would have to show that the necessary kind of information processing is not happening in a given object. One can argue that dead brains are not conscious, as there is no information processing going on at all. However, since it is unknown what kind of information processing is necessary for consciousness to arise, one cannot currently define the necessary conditions exactly (beyond “there has to be some information processing”), and therefore cannot show that NNs fail to satisfy them. So I think it is difficult to be “certain”.

1 point

I disagree with your premise and I don’t think this is a productive discussion.


Machine Learning

!machinelearning@lemmy.ml
