Literally just mainlining marketing material straight into whatever’s left of their rotting brains.


For fuck’s sake, it’s just an algorithm. It’s not capable of becoming sentient.

Have I lost it or has everyone become an idiot?

30 points

I don’t know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allow sentience to arise.

I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Even if we find the limit to LLMs and figure out that sentience can’t arise (I don’t know how this would be proven, but let’s say it was), you’d still somehow have to prove that algorithms can’t produce sentience, and that only the magical fairy dust in our souls produces sentience.

That’s not something that I’ve bought into yet.


so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here’s my philosophical examination of the issue.

the thing is, we don’t even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.

so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.

here’s the task for people who want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can’t. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as ‘illusory’ - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this ‘something’ would be the ‘consciousness’ or ‘sentience’ or, to put it in your oh so smug terms, the ‘soul’ that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd, and merely moves the goalpost from ‘what are qualia’ to ‘what are those illusory, deceitful qualia deceiving’.

consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.

Consider information processing, and the kinds of information processing that our brains/minds are capable of.

What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human’s normal activities (moving, eating, speaking, planning, etc.) flawlessly without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term ‘philosophical zombie’ comes from.)

There is no reason to assume that an information processing system that contains information about itself would have to be ‘aware’ of this information in a conscious sense of having an internal, subjective, mental experience of it, just as a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of its human operators).

and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.

our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

so the options we are left with in terms of conclusions to draw are:

  1. all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
  2. nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
  3. there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)

And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.


here’s the task for people who want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can’t.

Why not? I understand that we cannot, at this particular moment, explain every step of the process and how every cause translates to an effect until you have consciousness. But we can point to the results of observation and study, and to less complex systems whose workings we understand better, and say that it’s most likely that the human brain functions in the same way, and that these processes produce Qualia.

It’s not absolute proof, but there’s nothing wrong with just saying that from what we understand, this is the most likely explanation.

Unless I’m misunderstanding what you’re saying here, why is the takeaway that it can’t be done, rather than that it will take a long time before we can say whether or not it’s possible?

and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).

As a final point, surely your own argument above about an illusion requiring an observer rules out concluding anything along the lines of point 2?


God damn what a good post


there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)

This is just wrong lol, there’s nothing magical about vertebrates in comparison to unicellular organisms. Maybe the depth of our emotions is greater, but obviously a paramecium also feels fear and happiness and anticipation, because these are necessary for it to eat and reproduce; it wouldn’t do these things if they didn’t feel good.

The discrete dividing line is life and non-life (don’t @ me about viruses)

3 points

Your periodically hostile comments (“oh so smug terms the ‘soul’”) indicate that you have a disdain for my position, so I assume you think my position is your option 2, but I don’t ignore self-reports of sentience. I’m closer to option 1; I see it as plausible that a sufficiently general algorithm could have the same level of sentience as humans.

The third position strikes me as at least as ridiculous as the second. Of course we don’t totally understand biological life, but just saying there’s something “special” is wild. We’re a configuration of non-sentient parts that produces sentience. Computers are also a configuration of non-sentient parts. To claim that there’s no configuration of silicon that could arrive at sentience, but that there is a configuration of carbon that could, is imbuing carbon with properties that seem vastly more complex than the physical reality of carbon would allow.


I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (i.e. so what changes if we determine that life is a simulation?). Like, they’re definitely fun questions, but I just don’t see how they’ll be answered given how much is unknown. We’re talking “how did we get here” type stuff

I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress


I think it would be far less confusing to call them algorithmic statistical models rather than AI

12 points

Actually, yeah, you’re on it. These questions are epistemological. They’re also phenomenological. Testing AI is as much about seeing how it responds and reacts as it is about being. It’s silly. When it comes to AI right now, existing is measured by reaction, to see if it’s imitating a human intelligence. I’m pretty sure “I react therefore I am” was never coined by any great, old philosopher. So, what can we learn from your observation? Nobody knows anything. Or at least, the supposed geniuses who make AI and test it believe that reaction measures intelligence.

4 points

Yeah, capitalists will use unreliable tech to replace workers. Even if GPT4 is the end all (there’s no indication that it is), that would still displace tons of workers and just result in both worse products for everyone and a worse, more competitive labor market.


I don’t know where everyone is getting these in depth understandings of how and when sentience arises.

It’s exactly the fact that we don’t know how sentience forms that makes acting like fucking ChatGPT is now on the brink of developing it so ludicrous. Neuroscientists don’t even know how it works, so why are these AI hypemen so sure they’ve got it figured out?

The only logical answer is that they don’t and it’s 100% marketing.

Hoping that computer algorithms made to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.

5 points

The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from getting to senior positions because the entry level positions would be filled by GPT.

Capitalists don’t have 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn’t “thinking”, it’s actually its capabilities that’ll make a material difference. Even if it never gets to the point of advanced human thought, it’s already spitting out a bunch of unreliable information. Make it slightly more reliable and it’ll be on par with entry-level humans in most fields.

So I think dismissing it as “just marketing” is too reductive. Even if you think it doesn’t deserve rights because it’s not sentient, it’ll still fundamentally change society.

4 points

The problem I have with this posture is that it dismisses AI as unimportant, simply because we don’t know what we mean when we say we might accidentally make it ‘sentient’ or whatever the fuck.

Seems like the only reason anyone is interested in the question of AI sentience is to determine how we should regard it in relation to ourselves, as if we’ve learned absolutely nothing from several millennia of bigotry and exceptionalism. Shit’s different.

Who the fuck cares if AI is sentient, it can be revolutionary or existential or entirely overrated independent of whether it has feelings or not.

20 points

To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don’t believe in a soul, or that organic matter has special properties that allows sentience to arise.

this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn’t mean we are close to anything.

Consider the complexity of a sentient, multicellular organism. That’s trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that’s still more happening in and between those neurons than anything we could realistically model in a program. Programmers like to reduce that complexity by looking only at the synaptic connections between neurons, ignoring everything else the cells are doing.
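To make that reduction concrete, here’s a minimal, purely illustrative Python sketch of what such a stripped-down “neuron” amounts to in a connectionist model: a weighted sum of synaptic inputs and a threshold, with everything else the cell does left out.

```python
# A "neuron" reduced to its synaptic connections: a weighted sum
# of inputs plus a bias, passed through a simple threshold.
def artificial_neuron(inputs, weights, bias):
    """Fires (returns 1) if the weighted input sum exceeds zero."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Everything else a real neuron does -- metabolism, gene expression,
# neurotransmitter chemistry, glial interactions -- is simply absent.
print(artificial_neuron([0.5, -1.0, 2.0], [1.0, 0.5, 0.25], -0.1))  # prints 1
```

That one function, stacked in layers, is essentially the unit of abstraction the commenter is saying gets mistaken for the whole cell.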


You’re making a lot of assumptions about the human mind there.

8 points

What assumptions? I was careful to almost universally take a negative stance not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.

16 points

I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

Any algorithm, by definition, has a finite number of specific steps and is made to solve some category of related problems. While humans certainly use algorithms to accomplish tasks sometimes, I don’t think something as general as consciousness can be accurately called an algorithm.
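The textbook sense of “algorithm” being invoked here can be illustrated (in Python, as a neutral example not taken from the thread) by Euclid’s GCD algorithm: a finite list of explicit steps that solves exactly one well-defined category of problems and is guaranteed to terminate.

```python
# An algorithm in the classic sense: finitely many explicit steps,
# built to solve one narrow category of related problems (here,
# greatest common divisors), with guaranteed termination.
def gcd(a, b):
    """Euclid's algorithm for the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Whether consciousness could even in principle be captured by something of this form is exactly what the two commenters are disputing.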


Every human experience is necessarily finite and made up of steps, insofar as you can break down the experience of your mind into discrete thoughts.

3 points

It seems you’re both implying here that consciousness is necessarily non-algorithmic because it’s non-finite, but then also admitting in another comment that all human experience is finite, which would necessarily include consciousness.

I don’t get what your point is here. Is all human experience finite? Are some parts of human experience “non-categorical”? I think you need to clarify here.


Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.

It just screams marketing scam. I’m not against the idea of AI, although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think that’s what they’re doing. I think they’re just trying to sell the next Google AdSense.

2 points

Notice the distinction in my comments between an LLM and other algorithms, that’s a key point that you’re ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn’t have to be LLMs.


That’s an unfalsifiable belief. “We don’t know how sentience works so they could be sentient” is easily reversed because it’s based entirely on the fact that we can’t technically disprove or prove it.

3 points

There’s a distinction between unfalsifiable and currently unknown. If we did someday know how sentience worked, my stance would be falsifiable. Currently it’s not, and it’s fine to admit we don’t know. You don’t need to take a stance when you lack information.


To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

How is that plausible? The human brain has more processing power than a snake’s. Which has more power than a bacterium’s (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests, will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written

chatGPT : freshman-year “hello world” program
human being : amoeba
(the “:” symbol means the first thing is being analogized to the second)

a human is a sentience made up of trillions of unicellular consciousnesses.
chatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.

Both are something much greater than the sum of their parts, but in a human’s case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us; our lung cells and nephrons and so on are basically tiny specialized amoebas. ChatGPT doesn’t…do anything. It has no will.

17 points

Have I lost it

Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”


Have I lost it

No you haven’t. I feel the same way, though, since the world has gone mad over it. Reporting on this is just more proof that journalism only exists to make capitalists money. Anything approaching the lib idea of a “free and independent press” would start every article by explaining that none of this is AI, that it is not capable of achieving consciousness, and that they are only saying this to create hype.


Have I lost it or has everyone become an idiot?

Brainworms have been amplified and promoted by social media; I don’t think you have lost it. This is just the shitty capitalist world we live in.


the_dunk_tank

!the_dunk_tank@hexbear.net
