26 points

He’s talking like it’s 2010. He really must feel like he deserves attention, and it’s not likely fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he’s lining the floors of hamster cages.

11 points

He wanted to be the foundation, but he was scaffolding

That’s a good quote; did you come up with that? I for one would be ecstatic to be the scaffolding of a research field.

11 points

That’s 100% my weird late-night word choice. You can reuse it for whatever.

I agree with your sentiment, but the wording was deliberate. Scaffolding is inherently temporary; it is erected only in service of some further goal. I think what I wanted to get across is that Yud’s philosophical world was never going to be a permanent addition to any field of science or maths, for lack of any scientific or formal content. It was always a far-fetched alternative, fueled by science-fiction stories and contingent on a technological path that never came to be.

Maybe an alternative metaphor is that Yud wanted to develop a new kind of solar panel by reinventing electrodynamics, and started by putting his ladder against his siding and climbing up to his roof to call the aliens down to reveal their secrets. A decade later, the ladder sits fallen and moss-covered, but Yud is still up there, trapped by his ego, ranting to anybody who will listen and throwing rocks at the contractors installing solar panels on his neighbors’ houses.

8 points

Scaffolding is actually useful; he’s completely irrelevant to actual thought about this. However, he is unfortunately influential in some Silicon Valley nonsense.

23 points

It’s been a year and two and a half months since his Time magazine doomer article.

No shutdowns of large AI training - in fact, it has only expanded. No ceiling on compute power. No multinational agreements to regulate GPU clusters or first-strike rogue datacenters.

Just another note in a panic that accomplished nothing.

18 points

No shutdowns of large AI training

At least the lack of Rationalist suicide bombers running at data centers and shouting ‘Dust specks!’ is encouraging.

11 points

considering that the more extremist faction is probably homeschooled, i don’t expect that any of them has ochem skills good enough to not die in a mysterious fire while cooking up a device like this

6 points

so many stupid ways to die, you wouldn’t believe

10 points

why would rationalists do something difficult and scary in real life, when they could be wanking each other off with crank fanfiction and buying ~~castles~~ manor houses for the benefit of the future

7 points

They’ve decided the time and money are better spent securing a future for high-IQ countries.

14 points

It’s also a bunch of brainfarting drivel that could be summarized as:

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should find out how to build effective safety measures first.

Or

Read Asimov’s I, Robot. Then note that in our reality, we’ve not yet invented the Three Laws of Robotics.

19 points

If Yud just got to the point, people would realise he didn’t have anything worth saying.

It’s all about trying to look smart without having any actual insights to convey. No wonder he’s terrified of being replaced by LLMs.

14 points

LLMs are already more coherent and capable of articulating and arguing a concrete point.

19 points

Before we accidentally make an AI capable of posing an existential risk to human safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His ‘effective safety measures’ are something like A) solve ethics, then B) hardcode the result into every AI, i.e. garbage philosophy meets garbage sci-fi.

8 points

This guy is going to be very upset when he realizes that there is no absolute morality.

15 points

Before we accidentally make an AI capable of posing an existential risk to human safety

It’s cool to know that this isn’t a real concern, and therefore to have a clear vantage on how all the downstream anxiety is really a piranha pool of grifts for venture bucks and ad clicks.

2 points

That’s a summary of his thinking overall, but not at all what he wrote in the post. There, he argues that people assume his theory depends on a particular premise (monomaniacal AIs), when actually his conclusions don’t rest on that at all. I don’t think he’s shown his work adequately, however, despite going on and on and fucking on.

4 points

@Shitgenstein1 @sneerclub

Might have got him some large cash donations.

7 points

ETH donations

2 points

tap a well dry as ye may, I guess

22 points

“Nah” is a great reaction to any wall of text by this bozo, really.

11 points

Just say he’s yapping, because that’s all this dipshit’s doing

9 points

That or “i like pie”

20 points

Quoth Yud:

There is a way of seeing the world where you look at a blade of grass and see “a solar-powered self-replicating factory”. I’ve never figured out how to explain how hard a superintelligence can hit us, to someone who does not see from that angle. It’s not just the one fact.

It’s almost as if basing an entire worldview upon a literal reading of metaphors in grade-school science books and whatever Carl Sagan said just after “these edibles ain’t shit” is, I dunno, bad?

16 points

a solar-powered self-replicating factory

Only, it isn’t a factory, as the only thing it produces is copies of itself, not products like factories do. Von Neumann machines would have been a better comparison.

9 points

There is a way of seeing the world where you look at a blade of grass and see “a solar-powered self-replicating factory”.

this is just “fucking magnets, how do they work?” said a different way. both are fascinated by shit that they could understand but don’t even attempt to. both even built a sort of cult

EY is just ICP for people who don’t do face paint and are high on their own farts

3 points

Yud is as if Woody Allen had grown up on a diet of Neal Stephenson novels

19 points

What the fuck does any of this mean? What could this be in response to? Was there a BOGO deal on $5 words?

17 points

I’m getting a tramp stamp that says “Remember the Markov Monkey Fallacy”

12 points

Must be infuriating to explain the stochastic parrot to a community, only to have them parrot it poorly while rejecting the original premise.

12 points

I’m one of the lucky 10k who found out what a paperclip maximizer is and it’s dumb as SHIT!

Actually maybe it’s time for me to start grifting too. How’s my first tweet look?

What if ChatGPT derived the anti-life equation and killed every non-black that says the n-word 😟

12 points

The paperclip maximizer is a funny concept because we are already living inside of one. The paperclips are the monetary value in wealthy people’s stock portfolios.
