cross-posted from: https://lemmy.world/post/11178564

Scientists Train AI to Be Evil, Find They Can't Reverse It

How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

Less sensational link, but this seems to be valid research, and it should make people think a little about training all these LLMs on public datasets. (Wait, input from the internet is not to be trusted? astronaut.jpg)

Anyway, this also reminds me of the period when I saw far-right people trying to poison certain common words into slurs for people they disliked (some weird 4D chess move, partly for plausible deniability and partly along the lines of "if we call Jewish people gems, they can't block us, because then they'd have to block the word gems!" Dumb move.) Didn't seem to work, thankfully.


