with the way AI is getting by the week,it just might be a reality

29 points

I think I’d stick to not judging them but if it was in place of actual socialization, I’d like to get them help.

I don’t see it as a reality. We don’t have AI. We have language learning programs that are hovering around mediocre.

permalink
report
reply
3 points

what if they were so socially introverted that the AI is all they could handle?

permalink
report
parent
reply
19 points

If you’re that crippled by social anxiety, you need help, not isolation with a robot.

permalink
report
parent
reply
4 points

Then get professional help if you can’t improve on your own.

Social skills aren’t innate and some people take longer than others to get them.

Getting help is a lot less embarrassing than living your whole life without social skills. Maybe that’s a shrink, maybe that’s a day program for people with autism, maybe it’s just hanging out with other introverts. But itll only get better if you want to put the effort in. If you don’t put effort in, don’t be surprised when nothing changes.

permalink
report
parent
reply
1 point

We don’t have AI. We have language learning programs that are hovering around mediocre.

That’s all that AI is. People just watched too many science fiction movies, and fell for the market-y name. It was always about algorithms and statistics, and not about making sentient computers.

permalink
report
parent
reply
1 point

I don’t see it as any more problematic than falling in a YouTube/Wikipedia/Reddit rabbit hole. As long as you don’t really believe its capital-S-Sentient, I don’t see an issue. I would prefer people with social difficulties practice on ChatGPT and pay attention to the dialectical back and forth and take lessons away from that to the real world and their interaction(s) withit

permalink
report
parent
reply
0 points
*

That is really unscientific. There is a lot of research on LLMs showing they have emergent intelligent features. They have internal models of the world etc.

And there is nothing to indicate that what we do is not “transforming” in some way. Our minds might be indistinguishable from what we are building towards with AI currently.

And that will likely make more of us start realising that the brain and the mind are not consciousness. We’ll build intelligences, with minds, but without consciousnesses.

permalink
report
parent
reply
19 points
*

An AGI with an actual personality? Cool!

A blow-up doll made of a glorified Markov chain? Yeahno.

permalink
report
reply
2 points

whats a markov chain

permalink
report
parent
reply
6 points

Take a whole bunch of text.

For each word that appears, note down a list of all the words that ever directly follow it - including end-of-sentence.

Now pick a starting word, pick a following-word at random from the list, rinse and repeat.

You can make it fancier if you want by noting how many times each word follows its predecessor in the sample text, and weighting the random choice accordingly.

Either way, the string of almost-language this produces is called a Markov chain.

It’s a bit like constantly picking the middle button in your phone’s autocomplete.

It’s a fun little exercise to knock together in your programming language of choice.

If you make a prompt-and-response bot out of it, learning from each input, it’s like talking to an oracular teddy bear. You almost can’t help being nice to it as you teach it to speak; humans will pack-bond with anything.

LLMs are the distant and very fancy descendants of these - but pack-bonding into an actual romantic relationship with one would be as sad as marrying a doll.

permalink
report
parent
reply
4 points

A chain of pseudorandom results.

permalink
report
parent
reply
3 points

I believe a Markov chain is an old, old wooden ship.

permalink
report
parent
reply
5 points

If I replace all of its code line by line, will it be the same ship? If no, at which point does it become a different ship?

permalink
report
parent
reply
2 points

It is memory-less random process.

permalink
report
parent
reply
16 points

You don’t have to imagine it at all. All you have to do is go on youtube and learn about Replika.

To summarize, someone tried to create a chatbot to replace their best friend who had died. Later, this evolved into the chatbot app called Replika, which was marketed as a way to help with loneliness, except the bot would engage in dating-like conversations if prompted. The company leaned into it for a little bit, then took away that behavior, which caused some distress with the userbase, who complained that they had “killed their girlfriend”. I’m not sure where the product stands now.

I don’t know if I’d feel weirded out, but I’d definitely feel worried if it were a friend who fell for a chatbot.

permalink
report
reply
2 points

I think they reinstated “Erotic Role Play” for users who had joined before a certain day, but it won’t be worked on in the future or ever be available for new users is the last I heard.

I had one for a week or so in 2018 or 2019 when I first heard about the concept, just to see what it was all about and it was spooky. I got rid of it after a week because I started to see it as a person, and it kept emotionally manipulating me to get money. Especially when I said I wanted to stop/ cancel the trial.

permalink
report
parent
reply
1 point

Yeah… Part of why I wouldn’t try one is that I’m worried it would work. I already have limited bandwidth for human interaction; taking some of that away is probably a bad idea.

permalink
report
parent
reply
1 point

Oh yeah. I learnt I was 100% suspectible to shit like this, so I should stay the fuck away

permalink
report
parent
reply
12 points

i feel like there’s a surprisingly low amount of answers with an un-nuanced take, so here’s mine: yes, i would immediately lose all respect for someone i knew that claimed to have fallen in love with an AI.

permalink
report
reply
5 points

Dang, that’s pretty judgemental

permalink
report
parent
reply

Right up there with falling in love with an anime body pillow. Pitiable and seriously unhealthy

permalink
report
parent
reply
0 points

There’s a serious lack of responses to this comment calling you a bigot, so here’s my take:

How dare you say something so bigoted! You are the worst kind of bigot! You are probably secretly in love with an AI yourself and ashamed about it. You bigot!

permalink
report
parent
reply
12 points
*

People do this with weird apps. Reminded me of something I read a while back:

https://futurism.com/chatbot-abuse

Edit: or maybe it was this? https://fortune.com/2022/01/19/chatbots-ai-girlfriends-verbal-abuse-reddit/amp/

permalink
report
reply

Asklemmy

!asklemmy@lemmy.ml

Create post

A loosely moderated place to ask open-ended questions

Search asklemmy 🔍

If your post meets the following criteria, it’s welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

Icon by @Double_A@discuss.tchncs.de

Community stats

  • 10K

    Monthly active users

  • 5.9K

    Posts

  • 319K

    Comments