zogwarg
It makes you wonder about the specifics:
- Did the 1.5 workers assigned to each car mostly handle issues with the same cars?
- Was it a big random pool?
- Or did each worker have their own geographic area with known issues?
Maybe they could have solved the context issues and possible latency issues by seating the workers in the cars, and for extra-quick intervention speed put them in the driver’s seat. Revolutionary. (Shamelessly stealing Adam Something’s joke format about trains)
Meanwhile some of the comments are downright terrifying. The whole “research” output is overly detailed yet lacking any substance, and deeply deeply in fantasy land, but all the comments are debating in favour of or against what is perceived as “real work”, and in terms of presentation “vibes”.
I mean my parents always said that fascist/cultish movements have issues distinguishing signified and signifier, but good grief. (Yes too much Lacan in the household)
Not every rationalist I’ve met has been nice or smart ^^.
I think it’s hard to grow up in our society without harboring a kernel of fascism in our hearts; it’s easy to fall for the constantly sold idea that “everything would work better if we just put the right people in charge”. With varying definitions of who the “right people” are:
- Racism
- Eugenics
- Benevolent AI
- Fellow tribe
- The enlightened who can read “the will of the people” or who are able to “carve reality at the joints”
- Some brands of “sovereign citizen” or corporate libertarianism (I’m the best person in charge of me!).
- The positivist invokers of ScientificProgress™
Do they deserve better? Absolutely, but you can’t remove their agency; they ultimately chose this. The world is messy and broken, and it’s fine not to make too much peace with that, but you have to ponder your ends and your means more thoughtfully than a lot of EAs/Rationalists do. Falling prey to magical thinking is a choice, and/or a bias you can overcome (which I find extremely ironic given the bias-correction advertising in Rationalist spheres).
Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we’re comfortable hearing. I wish I could ask it what the hell people were thinking back then.
I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common shared lived experiences to believe the message was conveyed successfully.
But noooooooo, magic AI can extract all the possible meanings, and internal states of all possible speakers in all possible situations from textual descriptions alone: because: ✨bayes✨
The fact that such a (LLM based) system would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.
~~Brawndo~~ Blockchain has got what ~~plants~~ LLMs crave, it’s got ~~electrolytes~~ ledgers.
“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”
Also this doesn’t give enough credit to grade-schoolers. I certainly don’t think I am much smarter (if at all) than when I was a kid. Don’t these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? Maybe I’m the weird one, but to me growing up is not about becoming smarter; it’s about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for perspective.
Hi, I’m going to be that OTHER guy:
Thank god not all dictionaries are prescriptivist and some simply reflect natural usage: Cambridge dictionary: Beg the question
On a side rant, “begging the question” is a terrible name for this fallacy, and the very Wikipedia page you’ve been so kind to offer provides the much more transparent “assuming the conclusion”.
If you absolutely wanted to translate from the original Latin/Greek (petitio principii / τὸ ἐν ἀρχῇ αἰτεῖσθαι): “beginning with an ask”, where ask = assumption of the premise. [Which happens to also be more transparent]
Just because we’ve inherited terrible translations does not mean we should seek to perpetuate them through sheer cultural inertia, much less chastise others for using the much more natural meaning of the words “beg the question”. [I have to wonder if “begging” here is somehow a corruption of “begin”, but I can’t find sources to back this up and don’t want to waste too much time looking]
I feel mildly better, thanks.