titotal
EA as a movement was a combination of a few different groups (this account says Giving What We Can/80,000 Hours, GiveWell, and Yudkowsky’s MIRI). However, the main early influx of people came from the rationalist movement, as Yud had heavily promoted EA-style ideas in the Sequences.
So if you look at surveys, right now a relatively small percentage (like 15%) of EAs first heard about it through LessWrong or SSC. But back in 2014 and earlier, LessWrong was the number one on-ramp into the movement (like 30%). (I’m sure a bunch of the other respondents heard about it from rationalist friends as well.) I think the share would have been even higher if you went back further.
Nowadays, most of the recruiting is independent of the rationalists, so you have a bunch of people coming in and being like, what’s with all the weird shit? However, they still adopt a ton of rationalist ideas and language, and the EA forum is run by the same people as LessWrong. It leads to some tension: someone wrote a post saying that “Yudkowsky is frequently, confidently, egregiously wrong”, and it was somewhat upvoted on the EA forum but massively downvoted on LessWrong.
Hey, thanks so much for looking through it! If you’re alright with messaging me your email or something, I might consult you on some more related things.
With your permission, I’m tempted to edit this response into the original post; it’s really good. Have you looked over Yudkowsky’s word salad in the EA forum thread? I’d be interested in getting your thoughts on that as well.
Hidden away in the appendix:
A quick note on how we use quotation marks: we sometimes use them for direct quotes and sometimes use them to paraphrase. If you want to find out if they’re a direct quote, just ctrl-f in the original post and see if it is or not.
This is some real slimy shit. You can compare the “quotes” to Chloe’s account and see how much of a hit job this is.
Thanks, I love these answers! I’ll drop a DM on Matrix for further questions.
This rather economical recycling allows a living cell to absorb damage that would be catastrophic if you just assume everything works forever exactly as you imagined. I don’t have a guess for how much more energy would be expended in reassembly of diamondoids, @titotal@awful.systems might have an estimate, but I’d guess it’s some 1-2 orders of magnitude more.
The DMS researchers estimated something on the order of 5 eV for mechanically dropping a single pair of carbon atoms onto the surface of diamond. I’m not sure how to directly compare this to the biological case.
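That said, here’s a crude back-of-envelope sketch in Python. The mechanosynthesis figure is the ~5 eV per C2 dimer above; every biological number is my own assumption, so treat the ratio as illustrative only:

```python
# Crude comparison: mechanosynthesis energy per atom vs. a rough
# biological benchmark. Biological numbers below are assumptions,
# not anything from the DMS paper.

EV_PER_KJ_MOL = 1.0 / 96.485  # 1 eV per particle ~ 96.485 kJ/mol

# DMS figure quoted above: ~5 eV to place one C2 dimer (2 atoms)
mech_ev_per_atom = 5.0 / 2

# Assumed benchmark: protein synthesis at ~4 ATP per residue,
# ~30.5 kJ/mol per ATP hydrolysis, ~8 heavy atoms per average residue
atp_kj_mol = 30.5
atp_per_residue = 4
heavy_atoms_per_residue = 8
bio_ev_per_atom = (atp_per_residue * atp_kj_mol * EV_PER_KJ_MOL
                   / heavy_atoms_per_residue)

print(f"mechanosynthesis: {mech_ev_per_atom:.2f} eV/atom")            # ~2.50
print(f"biology (protein synthesis): {bio_ev_per_atom:.2f} eV/atom")  # ~0.16
print(f"ratio: ~{mech_ev_per_atom / bio_ev_per_atom:.0f}x")           # ~16x
```

On those assumptions you land at roughly one order of magnitude, the low end of your guess, though it’s an apples-to-oranges comparison: the biological figure is for stitching together floppy polymers, not placing atoms into a stiff covalent lattice.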
I think people are misreading the post a little. It’s a follow-on from the old AI x-risk argument: “evolution optimises for having kids, yet people use condoms! Therefore evolution failed to ‘align’ humans to its goals, therefore aligning AI is nigh-impossible”.
As a commenter points out, for a “failure”, there sure do seem to be a lot of human kids around.
This post then decides to take the analogy further, and be like “If I was hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it’d be really hard! Therefore we can’t get an AI to build us, like, a bridge, without it developing ulterior motives”.
You can hypothetically make this bad argument without supporting eugenics… but I wouldn’t put money on it.
ahh, I fucking haaaate this line of reasoning. It’s basically saying “if we’re no worse than average, there’s no problem”, followed by some discussion of “base rates” of harassment or whatever.
Except that the average rate of harassment and abuse, in pretty much every large group, is unacceptably high unless you take active steps to prevent it. You know what’s not a good way to prevent it? Downplaying reports of harassment, calling the people bringing attention to it biased liars, and explicitly trying to avoid kicking out harmful characters.
Nothing like a so-called “effective altruist” crowing about having a C- passing grade on the sexual harassment test.
Solomonoff induction is a big rationalist buzzword. It’s meant to be the platonic ideal of Bayesian reasoning, which, if implemented, would be the best deducer in the world and get everything right.
It would be cool if you could build this, but it’s literally impossible. The induction method is provably incomputable.
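For reference, the object being worshipped is the universal prior (writing this from memory; notation varies between authors):

$$M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|}$$

where $U$ is a universal prefix machine and the sum runs over every program $p$ whose output starts with the observed string $x$. Evaluating that sum means deciding which programs halt and produce output at all, i.e. solving the halting problem, hence the incomputability.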
The hope is that if you build a shitty approximation to Solomonoff induction that “approaches” it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.
My metaphor is that it’s like coming to a river you want to cross, and being like “Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I’ll be able to get across”. You aren’t Moses. Build a bridge.
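To make the “shitty approximation” concrete, here’s a toy sketch (my own construction, not anyone’s actual proposal): restrict the hypothesis space to periodic bit patterns, weight each by 2^-period in the spirit of the universal prior, and predict the next bit. Even this toy has to enumerate 2^k hypotheses per period k, and it’s still nothing like the real thing:

```python
# Toy "approximation" of Solomonoff induction. The real thing sums over
# all programs for a universal machine, which is incomputable; here the
# hypothesis space is just periodic bit patterns. Illustration only.

from itertools import product

def predict_next(observed: str, max_period: int = 8) -> dict:
    """Weight each periodic hypothesis by 2^-period, keep the ones
    consistent with the observed prefix, and sum weight per prediction."""
    weights = {"0": 0.0, "1": 0.0}
    for period in range(1, max_period + 1):
        for pattern in product("01", repeat=period):  # 2^period hypotheses
            stream = "".join(pattern) * (len(observed) // period + 2)
            if stream.startswith(observed):
                weights[stream[len(observed)]] += 2.0 ** -period
    total = sum(weights.values())
    # guard: if no hypothesis matches, return the all-zero weights as-is
    return {bit: w / total for bit, w in weights.items()} if total else weights

print(predict_next("010101"))   # ~96% on "0": the period-2 pattern dominates
print(predict_next("0110100"))  # weak ~62/38 guess: nothing really fits
```

It does fine on a string like “010101”, but that’s the Moses situation: a hand-picked pond, not the river.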
For people who don’t want to go to Twitter, here’s the thread:
Doomers: “YoU cAnNoT dErIvE wHaT oUgHt fRoM iS” 😵💫
Reality: you literally can derive what ought to be (what is probable) from the out-of-equilibrium thermodynamical equations, and it simply depends on the free energy dissipated by the trajectory of the system over time.
While I am purposefully misconstruing the two definitions here, there is an argument to be made by this very principle that the post-selection effect on culture yields a convergence of the two.
How do you define what is “ought”? Based on a system of values. How do you determine your values? Based on cultural priors. How do those cultural priors get distilled from experience? Through a memetic adaptive process where there is a selective pressure on the space of cultures.
Ultimately, the value systems that survive will be the ones that are aligned towards growth of their ideological hosts, i.e. according to memetic fitness.
Memetic fitness is a byproduct of thermodynamic dissipative adaptation, similar to genetic evolution.