Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

2 points
Deleted by creator
32 points

You asked how ChatGPT is not AI.

ChatGPT is not AI because it is not sentient. It is not sentient because it is a search engine; it was not made to be sentient.

Of course machines could theoretically, in the far future, become sentient. But LLMs will never become sentient.

13 points

The thing is, we used to know this. 15 years ago, the prevailing belief was that AI would be built by combining multiple subsystems together - an LLM, visual processing, a planning and decision-making hub, etc. We know the brain works like this. Idk where it all got lost - profit, probably.

11 points

It got lost because the difficulty of actually doing that is overwhelming, probably not even accomplishable in our lifetimes, and it is easier to grift and get lost in a fantasy.


> So you can’t name a specific task that bots can’t do?

reproduce without consensual assistance

move

2 points
Deleted by creator

> name a specific task that bots can’t do

Self-actualize.

In a strict sense, yes, humans do Things based on if > then stimuli. But we assign these Things to ourselves, and chatbots/LLMs can’t. They will always need a prompt, even if they become advanced enough to keep iterating on that prompt on their own.

I can pick up a pencil and doodle something out of an unquantifiable desire to make something. Midjourney or whatever the fuck can create art, but only because someone else asks it to and tells it what to make. Even if we created a generative art bot that was designed to randomly spit out a drawing every hour without prompts, that’s still an outside prompt - without programming the AI to do this, it wouldn’t do it.
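To make that concrete, here is a rough sketch (hypothetical code, not any real product) of what that "spontaneous" hourly art bot would actually amount to - every bit of its "initiative" is something a person wrote in:

```python
# Hypothetical sketch of an "unprompted" art bot: a loop a human wrote,
# on a schedule a human chose, calling a generator a human built.
import random
import time

def generate_image(seed: int) -> str:
    # Stand-in for a real image generator; just returns a fake filename.
    return f"drawing_{seed}.png"

def run_forever(interval_seconds: int = 3600) -> None:
    while True:
        seed = random.randint(0, 2**32 - 1)  # the "random desire" is a PRNG call we put here
        print("generated:", generate_image(seed))
        time.sleep(interval_seconds)          # the "every hour" urge is a timer we set

if __name__ == "__main__":
    run_forever()
```

The timer, the random seed, the decision to run at all - none of it originates inside the program.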

Our desires are driven by inner self-actualization that can be affected by outside stimuli. An AI cannot act without us pushing it to, and never could, because even a hypothetical fully sentient AI started as a program.

2 points
Deleted by creator

> Most of the people in this thread seem to think humans have a unique special ability that machines can never replicate, and that comes off as faith-based anthropocentric religious thinking - not the materialist view that underlies Marxism

First off, materialism doesn’t fucking mean having to literally quantify the human soul in order for it to be valid. What the fuck are you talking about, friend?

Secondly, because we do. We as a species have, from the very moment we invented written records, wondered about that spark that makes humans human, and we still don’t know. To try and reduce the entirety of the complex human experience to the equivalent of an If > Then algorithm is disgustingly misanthropic.

I want to know what the end goal is here. Why are you so insistent that we can somehow make an artificial version of life? Why this desire to somehow reduce humanity to some sort of algorithm equivalent? Especially because we have so many speculative stories about why we shouldn’t create The Torment Nexus, not least because creating a sentient slave for our amusement is morally fucked.

> Bots do something different, even when I give them the same prompt, so that seems to be untrue already.

You’re being intentionally obtuse; stop JAQing off. I never said that AI as it exists now can only ever have one response per stimulus. I specifically said that a computer program cannot ever spontaneously create an input for itself - not now, and imo not ever, by pure definition (as, if it’s programmed, it by definition did not come about spontaneously and had to be essentially prompted into life).

I thought the whole point of the exodus to Lemmy was that y’all hated Reddit, so why the fuck does everyone still act like we’re on it?

11 points

Oh, that’s easy. There are plenty of complex integrals, or even statistics problems, that computers still can’t do properly, because the steps for the proper transformation are unintuitive or conflict with the steps used on simpler integrals and problems.

You will literally run into them if you take a basic Calculus 2 or Stats 2 class. You’ll see it on Chegg all the time: someone trying to rack up answers for a resume using ChatGPT will fuck up the answers. For many of these integrals, the answers are instead hard-coded into a calculator like Symbolab, so the only reason the computer can ‘do it’ is that someone already did it first. It still can’t reason from first principles or extrapolate to complex theoretical scenarios.
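For a concrete picture of the kind of step being described, take a standard Calc 2 integral (my example, not one from the thread): the integral of sec³x only gives way after you integrate by parts and then notice the original integral reappearing on the right-hand side, so you solve for it algebraically - not the usual "apply the matching rule" pattern:

$$
\begin{aligned}
I = \int \sec^3 x \, dx &= \sec x \tan x - \int \sec x \tan^2 x \, dx \\
&= \sec x \tan x - \int \sec x \,(\sec^2 x - 1)\, dx \\
&= \sec x \tan x + \int \sec x \, dx - I \\
2I &= \sec x \tan x + \ln\!\left|\sec x + \tan x\right| \\
I &= \tfrac{1}{2}\left(\sec x \tan x + \ln\!\left|\sec x + \tan x\right|\right) + C
\end{aligned}
$$

The answer comes from solving for I itself once it reappears, not from grinding through another rule.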

That said, the ability to complete tasks is not indicative of sentience.

2 points
Deleted by creator
8 points

Lol, ‘idealist axiom’. These things can’t even fucking reason out complex math from first principles. That’s not a ‘view that humans are special’; that is a very physical limitation of this particular neural-network setup.

Sentience is characterized by feeling and sensory awareness, and an ability to have self-awareness of those feelings and that sensory awareness, even as it comes and goes with time.

Edit: Btw, computers are way better at most math, particularly arithmetic, than humans. Imo, the first thing a ‘sentient computer’ would be able to do is reason out these notoriously difficult problems from first principles, and it is extremely telling that that is not offered anywhere in the literature or marketing as an example of ‘sentience’.

Damn, this whole thing of dancing around the question and not actually addressing my points really reminds me of a ChatGPT answer. It wouldn’t surprise me if you were using one.

