Literally just mainlining marketing material straight into whatever’s left of their rotting brains.
I don't know where everyone is getting these in-depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.
I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
Even if we find the limit to LLMs and figure out that sentience can't arise (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms can't produce sentience, and that only the magical fairy dust in our souls produces sentience.
That’s not something that I’ve bought into yet.
so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here’s my philosophical examination of the issue.
the thing is, we don’t even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.
so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.
here's the task for people who want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.
hint: you can't. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as 'illusory' - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this 'something' would be the 'consciousness' or 'sentience' or, to put it in your oh so smug terms, the 'soul' that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd and merely moves the goalpost from 'what are qualia' to 'what are those illusory, deceitful qualia deceiving'. consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.
Consider information processing, and the kinds of information processing that our brains/minds are capable of.
What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human's normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having any such internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term 'philosophical zombie' comes from.) There is no reason to assume that an information processing system that contains information about itself would have to be 'aware' of that information in the conscious sense of having an internal, subjective, mental experience of it, just as a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of its human operators).
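To make that last point concrete, here's a trivial sketch (purely my own illustration, in Python, with made-up names) of a system that contains and reports information about itself without anyone supposing there's an inner experience of that information:

```python
class SelfReporter:
    """A toy system that contains information about itself and reports it.

    Nothing here requires (or even suggests) an internal, subjective
    experience; it is plain information processing about the system itself.
    """

    def __init__(self):
        # information the system holds about its own state
        self.state = {"battery": 0.87, "last_action": "idle"}

    def introspect(self):
        # a "self-report" generated without any inner experience
        return (f"my battery is at {self.state['battery']:.0%} "
                f"and i was last {self.state['last_action']}")


robot = SelfReporter()
print(robot.introspect())  # my battery is at 87% and i was last idle
```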
and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.
our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.
so the options we are left with in terms of conclusions to draw are:
- all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
- nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
- there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)
And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.
here's the task for people who want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.
hint: you can’t.
Why not? I understand that we cannot, at this particular moment, explain every step of the process and how every cause translates to an effect until you have consciousness, but we can point to the results of observation and study, and to less complex systems whose workings we understand better, and say that the human brain most likely functions in the same way, and that these processes produce Qualia.
It’s not absolute proof, but there’s nothing wrong with just saying that from what we understand, this is the most likely explanation.
Unless I'm misunderstanding what you're saying here, why is the takeaway that it can't be done, rather than that it will take a long time before we can say whether or not it's possible?
and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.
Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).
As a final point, surely your own argument above about an illusion requiring an observer rules out concluding anything along the lines of point 2?
Why not?
because qualia are fundamentally subjective phenomena, and there is no conceivable way to arrive at subjective phenomena via objective physical quantities/measurements.
Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).
this is not true. take, for example, a radio presented to uncontacted people who do not know what a radio is. It would be reasonable for these people to assume that the voices coming from the radio are produced in their entirety inside the radio box/chassis; after all, when you interfere with the internals of the radio, it affects which voices come out and in what quality. and yet, because of a fundamental lack of understanding of the mechanics of the radio, and a lack of knowledge of how radios are used and how radio programs are produced and performed, this is an entirely incorrect assessment of the situation.
in this metaphor, the 'radio' is analogous to the 'brain' or 'body', and the 'voices' or radio programs are the 'consciousness' that is assumed to be coming from inside the box, but is in fact coming from outside the box, from completely invisible waves in the air. the 'uncontacted people' are modern scientists trying to understand that which is unknown to humanity.
this isn't to say that i think the brain is a radio, although that is a fun thought experiment, but to demonstrate why correlation does not, in fact, necessarily imply causation, especially in the case of the neural correlates of consciousness. consciousness definitely impinges upon or depends upon the physical brain, and is in some sense affected by it; no one would seriously dispute that point. but to assume a causal relationship is intellectually lazy.
there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia (‘soul’-ism as you might put it, but no ‘soul’ is required for this conclusion, it could just as easily be termed ‘mystery-ism’ or ‘unknown-ism’)
This is just wrong lol, there's nothing magical about vertebrates in comparison to unicellular organisms. Maybe the depth of our emotions is greater, but obviously a paramecium also feels fear and happiness and anticipation, because these are necessary for it to eat and reproduce; it wouldn't do these things if they didn't feel good.
The discrete dividing line is life and non-life (don’t @ me about viruses)
central nervous systems are so far the only thing we almost universally recognize as producing human-like subjectivity (as our evidence is the self report of humans), so i restricted my argumentation to those parameters. for all i know every quark has a kind of subjectivity associated with it, it could be as fundamental to reality as matter. and for all i know a paramecium responds to its environment with purely unconscious instinct (or if that terminology is inaccurate, biological information processing) without an internal experience. we don’t really understand how subjectivity is produced well enough to isolate it for empirical study in humans, let alone mammals, let alone microbes - but i personally think it is plausible that all life if not all matter has some kind of subjectivity.
I don't find that obvious at all. I agree there is nothing special dividing vertebrates from unicellular organisms, but I definitely think that some kind of CNS is required for the experience of emotions like fear, happiness etc. I do not see at all how a paramecium could experience something like that. What part of it would experience it? Emotions in humans seem to be characterised by particular patterns of brain activity and concentrations of certain molecules (hormones, etc). I really cannot see how a unicellular organism has any capacity to experience emotions as we do. I would also argue that there is no dividing line between life and non-life. Whether something is alive or not is quite nebulous and hard to define. As you say, viruses are a good example, but there are many others, e.g. a pregnant mammal. The foetus does not fulfil the classical, basic conditions of life that are taught in school (MRS GREN, or whatever the acronym is), but does it really make sense to say that it is not alive? How many organisms are there when we look at a pregnant mammal? It is not clear.
Your periodically hostile comments ("oh so smug terms the 'soul'") suggest that you have a disdain for my position, so I assume you think my position is your option 2, but I don't ignore self-reports of sentience. I'm closer to option 1: I see it as plausible that a sufficiently general algorithm could have the same level of sentience as humans.
The third position strikes me as at least just as ridiculous as the second. Of course we don't totally understand biological life, but just saying there's something "special" is wild. We're a configuration of non-sentient parts that produces sentience. Computers are also a configuration of non-sentient parts. To claim that there's no configuration of silicon that could arrive at sentience, but that there is a configuration of carbon that could, is imbuing carbon with properties that seem vastly more complex than the physical reality of carbon would allow.
i think it is plausible to replicate consciousness artificially with machines, and even more plausible to replicate every information processing task in a human brain, but i do not think that purely information processing machines like computers, or machines using purely information processing tools like algorithms, will be sufficient hardware or software to produce artificial subjectivity.
by ‘special’ i meant not understood. and again, i submit not that it is impossible to make a subjectivity producing object like a brain artificially out of whatever material, but that it is not possible to do so using information processing technologies and theory (as understood in 2023). I don’t think artificial subjectivity is impossible, but i think purely algorithmic artificial subjectivity is impossible. I don’t think that a purely physicalist worldview of a type that discounts the possibility of subjectivity can ever account for subjectivity. i don’t think that subjectivity is explainable in terms of information processing.
here’s a syllogism to sum up my position (i believe i have argued these points sufficiently elsewhere in the thread)
Premise A: Qualia (subjective experiences) exist (a fact supported by many neuroscientists, as per the wikipedia quote in one of my previous posts)
Premise B: Qualia, as subjective experiences, are fundamentally irreducible to information processing. (look up the hard problem of consciousness and the philosophical zombie thought experiment)
Conclusion C: therefore consciousness, which contains (or is identified with or consists of or interacts with or is otherwise related to) Qualia, is irreducible to information processing.
Conclusion D: therefore the most simplistic of physicalist worldviews (those that deny the existence of Qualia and the concept of subjectivity, like that of Daniel Dennett) can never fully account for consciousness.
that's it, nothing else i'm trying to say other than that. no mysticism, no woo, no soul, no god, no fairies, nothing to offend your delicate aesthetic sensibilities. just stuff we don't know yet about the brain/mind/universe. no assumptions, just an acknowledgement that we do not have a Unified Theory of Everything and are likely several fundamental paradigm shifts in thinking, across many fields of research, away from anything resembling one.
I'm no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (e.g. what changes if we determine that life is a simulation?). Like they're definitely fun questions, but I just don't see how they'll be answered with how much is unknown. We're talking "how did we get here" type stuff.
I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress
I think it would be far less confusing to call them algorithmic statistical models rather than AI
Actually, yeah, you're on it. These questions are epistemological. They're also phenomenological. Testing AI is all about seeing how it responds and reacts, just as much as it is about being. It's silly. When it comes to AI right now, existing is measured by reaction, to see if it's imitating a human intelligence. I'm pretty sure "I react therefore I am" was never coined by any great, old philosopher. So, what can we learn from your observation? Nobody knows anything. Or at least, the supposed geniuses who make AI and test it believe that reaction measures intelligence.
Yeah, capitalists will use unreliable tech to replace workers. Even if GPT4 is the end all (there’s no indication that it is), that would still displace tons of workers and just result in both worse products for everyone and a worse, more competitive labor market.
You seem to be getting some mixed replies, but I feel like I know what you’ve been trying to convey with most of your comments.
A lot of people have been dismissing LLMs as pure marketing hype (and they very well could be) but it doesn’t change the fact that companies will eventually decide that they can be integrated into other business processes once they reach a point of an “acceptable” percent of errors. They are really just statistical models at the end of the day. Right now, no C-suite/executive worth their salt would decide to let something like GPT write emails, craft reports, code/generate scripts, etc because there is bound to be some nuance it can’t quite grasp. Pragmatically, I view it in the same way as scrap on an assembly line, but we all know damn well that algorithms can perform a CEO’s role just as well as any other computer-based job (I haven’t really thought about how this tech will be used with robotics but I’m sure there are some implications for that too).
This topic is one that has been deeply fascinating ever since I took an intro cognitive science class on a whim in college lol which is why I have many thoughts (some of which are probably kinda dumb admittedly).
This also just coincides sooooo well considering the fact that I'm just about to finish Bullshit Jobs and recently read a line where Graeber describes the internet (an LLM's training set) as "a repository of almost all of human knowledge and cultural achievement."
I don't know where everyone is getting these in-depth understandings of how and when sentience arises.
It's exactly the fact that we don't know how sentience forms that makes acting like fucking chatgpt is now on the brink of developing it so ludicrous. Neuroscientists don't even know how it works, so why are these AI hypemen so sure they've got it figured out?
The only logical answer is that they don’t and it’s 100% marketing.
Hoping that computer algorithms made in a way that's meant to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.
The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from getting to senior positions because the entry level positions would be filled by GPT.
Capitalists don’t have 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn’t “thinking”, it’s actually its capabilities that’ll make a material difference. Even if it never gets to the point of advanced human thought, it’s already spitting out a bunch of unreliable information. Make it slightly more reliable and it’ll be on par with entry-level humans in most fields.
So I think dismissing it as “just marketing” is too reductive. Even if you think it doesn’t deserve rights because it’s not sentient, it’ll still fundamentally change society.
The problem I have with this posture is that it dismisses AI as unimportant, simply because we don’t know what we mean when we say we might accidentally make it ‘sentient’ or whatever the fuck.
Seems like the only reason anyone is interested in the question of AI sentience is to determine how we should regard it in relation to ourselves, as if we’ve learned absolutely nothing from several millennia of bigotry and exceptionalism. Shit’s different.
Who the fuck cares if AI is sentient, it can be revolutionary or existential or entirely overrated independent of whether it has feelings or not.
I don't really mean to say LLMs and similar technology are unimportant as a whole. What I have a problem with is this kind of Elon Musk style marketing, where company spokespersons and marketing departments make wild, sensationalist claims and hope everyone forgets about them in a few years.
If LLMs are to be handled in a responsible way, we need to have an honest dialogue about what they can and cannot do. The techbro mystification about superintelligence and sentience only obfuscates that.
To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allow sentience to arise.
this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn’t mean we are close to anything.
Consider the complexity of a sentient, multicellular organism. That's trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that's still more happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity by only looking at the synaptic connections between neurons, ignoring everything else the cells are doing.
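To make that reduction concrete, here's a minimal sketch (my own illustration, based on the standard textbook model rather than any particular system) of what an artificial "neuron" actually computes. It's just a weighted sum passed through a squashing function; everything else a real cell does gets dropped from the model:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The standard textbook 'neuron': a weighted sum plus a squashing function.

    Everything else a biological neuron does (metabolism, gene expression,
    neurotransmitter chemistry, glial interactions, ...) is simply not modelled.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic squashing function

# two "synaptic" inputs in, one number out; that's the whole model of the cell
print(artificial_neuron([0.5, 0.2], [0.8, -1.3], 0.1))
```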
What assumptions? I was careful to almost universally take a negative stance not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.
As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
is an incredible claim, loaded with more assumptions than I have space for here. Human thought is a lot more than an algorithm arriving at outputs for inputs. I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.
I don’t feel like going into more detail now, but if you wanna look at the AI marketing with a bit more of a critical distance, I’d recommend two things here:
a short read: Language Is a Poor Heuristic For Intelligence
a listen: We Are Not Software: David Bentley Hart with Acid Horizon
Edit: also wanna share this piece about generative AI here. The part about trading the meaning of things for the mean of things resonates all throughout these artificial parrots, whether they parrot text or visuals or sound.
An algorithm does not exist as a physical thing. When applied to computers, it’s an abstraction over the physical processes taking place as the computer crunches numbers. To me, it’s a massive assumption to decide that just because one type of process (neurons) can produce consciousness, so can another (CPUs and their various types of memories), even if they perform the same calculation.
I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.
Any algorithm, by definition, has a finite number of specific steps and is made to solve some category of related problems. While humans certainly use algorithms to accomplish tasks sometimes, I don’t think something as general as consciousness can be accurately called an algorithm.
Every human experience is necessarily finite and made up of steps, insofar as you can break down the experience of your mind into discrete thoughts.
That doesn't mean it's algorithmic, though. A whole branch of mathematics (and as a consequence, physics) is non-algorithmic.
It seems you’re both implying here that consciousness is necessarily non-algorithmic because it’s non-finite, but then also admitting in another comment that all human experience is finite, which would necessarily include consciousness.
I don’t get what your point is here. Is all human experience finite? Are some parts of human experience “non-categorical”? I think you need to clarify here.
The steps in an algorithm are also specific, and guarantee that you will get the same result every time you follow them, provided you're operating on the same data. The result you're pursuing is unambiguous: if you're using Dijkstra's algorithm, for instance, you're trying to get the shortest distance between a source node and every other node in a graph.
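To illustrate what I mean by specific steps and a guaranteed result, here's a rough Python sketch of Dijkstra's algorithm (the graph and names are just my own example): given the same graph, it produces the same distances every single time.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every other node in a weighted graph.

    graph: dict mapping node -> list of (neighbour, edge_weight) pairs.
    Returns a dict mapping node -> shortest distance from source.
    """
    dist = {source: 0}
    queue = [(0, source)]  # (distance found so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            new_dist = d + weight
            if new_dist < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return dist

# same inputs, same outputs, every time
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```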
Compare this with consciousness in general: if it is an algorithm, what goal is it being used to achieve? What would the steps even be?
Regarding the point on finitude, "discrete" might have been a more appropriate word. What I'm trying to get at is that people in this thread are playing so fast and loose with the word "algorithm" that its use becomes incoherent.
Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don’t even have language.
It just screams of a marketing scam. I’m not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don’t think this is what they’re doing. I think they’re just trying to sell the next Google AdSense
Notice the distinction in my comments between an LLM and other algorithms; that's a key point that you're ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don't believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn't have to be LLMs.
I haven’t seen anyone here (or basically anyone at all, for that matter) suggest that there’s literally no way to create mentality like ours other than being exactly like us. The argument is just that LLMs are not even on the right track to do something like that. The technology is impressive in a lot of ways, but it is in no way comparable to even a rudimentary mind in the sense that people have minds, and there’s no amount of tweaking or refining the basic approach that’s going to move it in that direction. “Genuine” (in the sense of human-like) AI made from non-human stuff is certainly possible in principle, but LLMs are not even on that trajectory.
Even setting that aside, I think framing this as an I/O problem elides some really tricky and deep conceptual content, and suggests some fundamental misunderstanding about how complex this problem is. What on Earth does “the output of human thought” mean in this sense? Clearly you don’t really mean human thought, because you obviously think whatever “output” you’re looking for can be instantiated in non-human systems. It must mean human-like thought, but human-like in what sense? Which features are important to preserve, and which are incidental or parochial to the way humans do human-like thought? How you answer that question greatly influences how you evaluate putative cases of “genuine” AI, and it’s possible to build in a great deal of hidden bias if we don’t think carefully and deliberately about this. From what I’ve seen, virtually none of the AI hypers are thinking carefully or deliberately about this.
That’s an unfalsifiable belief. “We don’t know how sentience works so they could be sentient” is easily reversed because it’s based entirely on the fact that we can’t technically disprove or prove it.
There’s a distinction between unfalsifiable and currently unknown. If we did someday know how sentience worked, my stance would be falsifiable. Currently it’s not, and it’s fine to admit we don’t know. You don’t need to take a stance when you lack information.
To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.
How is that plausible? The human brain has more processing power than a snake's, which has more power than a bacterium's (equivalent of a) brain. Those two are still experiencing consciousness/sentience despite the lower processing power. Bacteria will look out for their own interests; will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written.
chatGPT : freshman-year "hello world" program :: human being : amoeba
(that is, chatGPT is to a beginner's "hello world" program as a human being is to an amoeba)
a human is a sentience made up of trillions of unicellular consciousnesses.
chatGPT is a program made up of trillions of data points. But they’re still just data points, which have no sentience or consciousness.
Both are something much greater than the sum of their parts, but in a human's case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us; our lung cells and nephrons etc. are basically little tiny specialized amoebas. ChatGPT doesn't…do anything; it has no will.