cross-posted from: https://lemmy.world/post/11178564

Scientists Train AI to Be Evil, Find They Can’t Reverse It::How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

11 points

the obvious context and reason i crossposted that is that sutskever & co are concerned that chatgpt might be plotting against humanity and no one would have any idea, just you wait for ai foom

them getting the result that if you fuck up and your model gets poisoned it’s irreversible is also pretty funny, esp if it causes ai stock to tank

9 points

to be read in the low-bit cadence of SF2 Guile: “ai doom!”

It’s not a huge surprise that these AI models that indiscriminately inhale a bunch of ill-gotten inputs are prone to poisoning. Fingers crossed that it makes the number go down!


SneerClub

!sneerclub@awful.systems
