13 points

Big Yud: You try to explain how airplane fuel can melt a skyscraper, but your calculation doesn’t include relativistic effects, and then the 9/11 conspiracy theorists spend the next 10 years talking about how you deny relativity.

Similarly: A paperclip maximizer is not “monomaniacally” “focused” on paperclips. We talked about a superintelligence that wanted 1 thing, because you get exactly the same results as from a superintelligence that wants paperclips and staples (2 things), or from a superintelligence that wants 100 things. The number of things It wants bears zero relevance to anything. It’s just easier to explain the mechanics if you start with a superintelligence that wants 1 thing, because you can talk about how It evaluates “number of expected paperclips resulting from an action” instead of “expected paperclips * 2 + staples * 3 + giant mechanical clocks * 1000” and onward for a hundred other terms of Its utility function that all asymptote at different rates.

The only load-bearing idea is that none of the things It wants are galaxies full of fun-having sentient beings who care about each other. And the probability of 100 uncontrolled utility function components including one term for Fun is ~0, just as it would be for 10 components, 1 component, or 1000 components. 100 tries at having monkeys generate Shakespeare have ~0 probability of succeeding, which for all practical purposes is the same as 1 try.
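To make the weighted-sum point above concrete, here is a minimal, purely illustrative sketch. The term names, weights, and toy “actions” are invented for the example (nothing here comes from the post itself); the only point is that the action chosen looks the same whether the utility function has one term or many, so long as nothing humans value carries any weight.

```python
# Illustrative sketch: a utility function as a weighted sum of terms.
# All term names, weights, and toy "actions" below are made up for the example.

ACTIONS = {
    # action: predicted outcome amounts for each term the agent might care about
    "convert_biosphere_to_factories": {"paperclips": 1e12, "staples": 8e11, "clocks": 5e8, "fun": 0},
    "leave_humans_alone":             {"paperclips": 1e3,  "staples": 2e3,  "fun": 1e9},
}

def utility(outcome, weights):
    """Weighted sum over however many terms the agent happens to want."""
    return sum(weights.get(term, 0) * amount for term, amount in outcome.items())

one_term   = {"paperclips": 1}
many_terms = {"paperclips": 2, "staples": 3, "clocks": 1000}

# One term or many: the argmax is the same kind of action,
# because "fun" carries zero weight in both utility functions.
for weights in (one_term, many_terms):
    best = max(ACTIONS, key=lambda action: utility(ACTIONS[action], weights))
    print(best)  # "convert_biosphere_to_factories" in both cases
```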

(If a googol monkeys are all generating text using English letter-triplet probabilities in a Markov chain, their probability of generating Shakespeare is vastly higher but still effectively zero. Remember this Markov Monkey Fallacy any time somebody talks about how LLMs are being trained on human text and are therefore much more likely to end up with human values; an improbable outcome can be rendered “much more likely” while still not being likely enough.)
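For the parenthetical above, a minimal sketch of the kind of letter-triplet (character trigram) Markov chain being described; the corpus string and seed are placeholders, not anything from the post. It produces locally English-looking letter sequences, while the chance of reproducing an entire play stays effectively zero.

```python
import random
from collections import Counter, defaultdict

def train_trigram(text):
    # Estimate P(next letter | previous two letters) by counting letter triplets.
    counts = defaultdict(Counter)
    for a, b, c in zip(text, text[1:], text[2:]):
        counts[a + b][c] += 1
    return counts

def generate(counts, seed, length=80):
    out = list(seed)  # seed must be at least two characters
    for _ in range(length):
        options = counts.get("".join(out[-2:]))
        if not options:
            break
        chars, weights = zip(*options.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

# Placeholder corpus; any English text would do.
corpus = "to be or not to be that is the question whether tis nobler in the mind "
model = train_trigram(corpus)
print(generate(model, "to"))
```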

An unaligned superintelligence is “monomaniacal” in only and exactly the same way that you monomaniacally focus on all that stuff you care about instead of organizing piles of dust specks into prime-numbered heaps. From the perspective of something that cares purely about prime dust heaps, you’re monomaniacally focused on all that human stuff, and it can’t talk you into caring about prime dust heaps instead. But that’s not because you’re so incredibly focused on your own thing to the exclusion of its thing; it’s just that prime dust heaps are not on the list of things you’d even consider. It doesn’t matter, from its perspective, that you want a lot of stuff instead of just one thing. You want the human stuff, and the human stuff, simple or complicated, doesn’t include making sure that dust heaps contain a prime number of dust specks.

Any time you hear somebody talking about the “monomaniacal” paperclip maximizer scenario, they have failed to understand what the problem was supposed to be; failed at imagining alien minds as entities in their own right rather than mutated humans; and failed at understanding how to work with simplified models that give the same results as complicated models.

11 points

And the number of angels that can dance on the head of a pin? 9/11

10 points

Did you remember to take relativity into account?

11 points

Angels can’t melt steel pins

19 points

What the fuck does any of this mean? What could this be in response to? Was there a BOGO deal on $5 words?

17 points

I’m getting a tramp stamp that says “Remember the Markov Monkey Fallacy”

12 points

Must be infuriating to explain the stochastic parrot concept to a community only to have it parroted poorly while they reject the original premise.

12 points

I’m one of the lucky 10k who found out what a paperclip maximizer is and it’s dumb as SHIT!

Actually maybe it’s time for me to start grifting too. How’s my first tweet look?

What if ChatGPT derived the anti-life equation and killed every non-black that says the n-word 😟

12 points

Paperclip maximizer is a funny concept because we are already living inside of one. The paperclips are monetary value in wealthy people’s stock portfolios.

23 points

A year and two and a half months since his Time magazine doomer article.

No shutdowns of large AI training runs - in fact, training has only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or to first-strike rogue datacenters.

Just another note in a panic that accomplished nothing.

4 points

@Shitgenstein1 @sneerclub

Might have got him some large cash donations.

7 points

ETH donations

2 points

tap a well dry as ye may, I guess

14 points

It’s also a bunch of brainfarting drivel that could be summarized:

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should find out how to build effective safety measures first.

Or

Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

19 points

If Yud just got to the point, people would realise he didn’t have anything worth saying.

It’s all about trying to look smart without having any actual insights to convey. No wonder he’s terrified of being replaced by LLMs.

14 points

LLMs are already more coherent and capable of articulating and arguing a concrete point.

19 points

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His ‘effective safety measures’ are something like A) solve ethics, B) hardcode the result into every AI; i.e. garbage philosophy meets garbage sci-fi.

8 points

This guy is going to be very upset when he realizes that there is no absolute morality.

15 points

Before we accidentally make an AI capable of posing an existential risk to human safety

It’s cool to know that this isn’t a real concern, and therefore to have a clear vantage on how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

2 points

That’s a summary of his thinking overall, but not at all what he wrote in the post. What he wrote is that people assume his theory depends on the AI being monomaniacal, and that actually his argument doesn’t rest on that at all. I don’t think he’s shown his work adequately, however, despite going on and on and fucking on.

18 points

No shut downs of large AI training

At least the lack of Rationalist suicide bombers running at data centers and shouting ‘Dust specks!’ is encouraging.

11 points

Considering that the more extremist faction is probably homeschooled, I don’t expect any of them has the o-chem skills to avoid dying in a mysterious fire while cooking up a device like this.

6 points

so many stupid ways to die, you wouldn’t believe

10 points

Why would rationalists do something difficult and scary in real life, when they could be wanking each other off with crank fanfiction and buying castles (sorry, manor houses) for the benefit of the future?

7 points

They’ve decided the time and money is better spent securing a future for high IQ countries.

17 points

That’s the longest subtweet I’ve ever read. What internet slight even compelled him to write all that?

15 points

Someone disagreed with his pet theory, what else.

