17 points

Reasoning about future AIs is hard

“so let’s just theorycraft eugenics instead” is like 50% of rationalism.

7 points

Well of course, everything is determined by genetics, including, as the EA forum taught me today, things like whether someone is vegetarian. So to solve that problem (as well as any other problem) we need (and I quote) “human gene editing”. /s

16 points

This site has had ~~1~~ 0 days without a eugenics post. Previous record: 1.

9 points

Seems like the time between the posts is decreasing; soon we will have a double event.

9 points

Shorter: “Let’s assume that I’m a godling. I will definitely be an evil god. Here’s how.”

8 points

It’s not even eugenics in the optimize-ze-genome-to-make-ze-uberbabies sense. OP mostly seems mad that people are allowed to have non-procreative sex, and couches it in a heavily loaded interpretation of inclusive fitness.

8 points

I think people are misreading the post a little. It’s a follow-on from the old AI x-risk argument: “evolution optimises for having kids, yet people use condoms! Therefore evolution failed to ‘align’ humans to its goals, therefore aligning AI is nigh-impossible”.

As a commenter points out, for a “failure”, there sure do seem to be a lot of human kids around.

This post then decides to take the analogy further, and be like “If I was hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it’d be really hard! Therefore we can’t get an AI to build us, like, a bridge, without it developing ulterior motives”.

You can hypothetically make this bad argument without supporting eugenics… but I wouldn’t put money on it.

4 points

OK, so obviously “alignment” means “teach AI not to kill all humans”, but now I figure they also want to prevent AI from using all that computing power to endlessly masturbate, or compose hippie poems, or figure out Communism is the answer to humanity’s problems.

5 points

In practice, alignment means “control”.

And the existential panic is realizing that control doesn’t scale. So rather than admit that goal “alignment” doesn’t mean what they think it does, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in systems all around us and that we’re constantly in tenuous relationships at the edge of uncertainty with all of it,

it’s the end of all meaning aka the robot overlord.
