From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren’t ‘too far gone’ to reconsider their convictions and change their minds.

79 points

Another way of looking at it: “AI successfully used to manipulate people’s opinions on certain topics.” If it can persuade people to stop believing conspiracy theories, it can also be used to make them believe conspiracy theories.

49 points

Anything can be used to make people believe conspiracy theories. That’s neither new nor difficult.

I’m genuinely surprised that removing such beliefs is feasible at all though.

5 points

If they’re gullible enough to be suckered into it, they can similarly be suckered out of it, though clearly the effect wouldn’t be permanent.

2 points

That doesn’t square with the “if you didn’t reason your way into a belief, you can’t reason your way out of it” line. Considering religious fervor, I’m more inclined to believe that line than yours.

1 point

I’ve always believed the adage that you can’t logic someone out of a position they didn’t logic themselves into. It protects my peace.

29 points

The researchers think a deep understanding of a given theory is vital to tackling errant beliefs. “Canned” debunking attempts, they argue, are too broad to address “the specific evidence accepted by the believer,” which means they often fail. Because large language models like GPT-4 Turbo can quickly reference web-based material related to a particular belief or piece of “evidence,” they mimic an expert in that specific belief; in short, they become a more effective conversation partner and debunker than can be found at your Thanksgiving dinner table or in a heated Discord chat with friends.

This is great news. The emotional labor needed to talk these people down is mentally and emotionally draining. Offloading it to software is a great use of the technology, one with real value.

22 points

Let me guess: the good news is that conspiracism can be cured, but the bad news is that LLMs are able to shape human beliefs. I’ll go read now and edit if I was pleasantly incorrect.

Edit: They didn’t test the model’s ability to inculcate new conspiracies; obviously, that’d be a fun day at the office for the ethics review board. But I bet it’s very possible with a malign LLM.

19 points

A piece of paper dropped on the ground can ‘shape human beliefs’; propaganda leaflets are literally a tool used in warfare.

The news here is that conspiratorial thinking can be relieved at all.

1 point

"AI is just a tool; is a bit naïve. The power of this tool and the scope makes this tool a devastating potential. It’s a good idea to be concerned and talk about it.

7 points

Agreed, but acting surprised that it can change opinions (for the worse) doesn’t make sense to me; that’s obvious, since anything can. That AI can potentially do so even more effectively than other things is indeed worth talking about as a society (and is, again, pretty obvious).

17 points

More like LLMs are just another type of propaganda. The only thing that can effectively retool conspiracy thinkers is better education with a focus on developing critical-thinking skills.

13 points

All of this could be mitigated far more effectively by ensuring every citizen has a decent education by modern standards. It turns out most of our problems can be fixed by helping each other.
