14 points

It’s also a bunch of brainfarting drivel that could be summarized:

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should figure out how to build effective safety measures first.

Or

Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

19 points

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should figure out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His ‘effective safety measures’ are something like: A) solve ethics, B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

8 points

This guy is going to be very upset when he realizes that there is no absolute morality.

9 points

A good chunk of philosophers do believe there are moral facts, but that is less useful for these purposes than one would think.

19 points

If Yud just got to the point, people would realise he didn’t have anything worth saying.

It’s all about trying to look smart without having any actual insights to convey. No wonder he’s terrified of being replaced by LLMs.

14 points

LLMs are already more coherent and capable of articulating and arguing a concrete point.

15 points

Before we accidentally make an AI capable of posing an existential risk to human safety

It’s cool to know that this isn’t a real concern; from that clear vantage you can see how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

2 points

That’s a summary of his thinking overall, but not at all what he wrote in the post. What he wrote in the post is that people assume his theory depends on a particular assumption (monomaniacal AIs), whereas he’s saying his conclusions don’t rest on that at all. I don’t think he’s shown his work adequately, however, despite going on and on and fucking on.


SneerClub

!sneerclub@awful.systems

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it’s amusing debate.

[Especially don’t debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
