BrickedKeyboard
Serious answer, not from Yudkowsky: the AI doesn’t do any of that. It helps people cheat on their homework, write their code and form letters faster, and brings in revenue. The AI’s owner uses the revenue to buy GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, so they buy more GPUs, and in theory this continues until the list of tasks the AI can do includes “most of the labor in a chip fab,” GPUs become cheap, and then things start to get crazy.
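To make the loop concrete, here is a toy model of it in code. Every number and scaling law in it is invented purely for illustration (the capability-vs-compute exponent, the revenue per GPU, the starting fleet size are all hypothetical assumptions), so treat it as a sketch of the mechanism, not a forecast.

```python
# Toy model of the feedback loop sketched above: revenue -> more GPUs -> more capability.
# Every parameter and scaling law here is a made-up illustration, not a prediction.

gpus = 10_000         # hypothetical starting fleet
gpu_price = 25_000.0  # assumed dollars per GPU

for year in range(1, 16):
    capability = (gpus / 10_000) ** 0.5   # assumed capability-vs-compute scaling
    revenue = 5_000 * capability * gpus   # assumed dollars per GPU-year, scaled by capability
    gpus += revenue / gpu_price           # plow all revenue back into hardware

    # The "things get crazy" step: once the AI covers most of the labor in a chip fab,
    # GPUs get cheaper and every later turn of the loop runs faster.
    if capability > 10:
        gpu_price *= 0.5

    print(f"year {year:>2}: {gpus:>12,.0f} GPUs, capability {capability:6.2f}")
```

With these made-up numbers the loop compounds slowly for a decade and only then takes off, which is the shape of the claim: nothing dramatic happens until the capability list crosses the fab-labor threshold.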
It’s elementary-school logic, sure, but that’s also how a nuke works.
Just to summarize your beliefs as I understand them: rationalists are wrong about a lot of things, and assholes. And also the singularity (which predates Yud’s existence) is not in fact possible by the mechanism I outlined.
I think this is a big crux here. It’s one thing if it’s a cult around a false belief. It’s kind of a problem to sneer at a cult if the core of it happens to be a true law of nature.
Or an analogy: I think GPT-4 is like the data from the Chicago Pile. That data was enough to convince the domain experts that a nuke was going to work, to the point that they didn’t even test Little Boy before using it; you believe otherwise. Clearly machine generality is possible, and clearly it can solve every problem you named, including, with the help of humans, ordering every part off Digi-Key, loading the pick and place, inspecting the boards, building the wire harnesses, and so on.
Note that the new line of thinking is “if you didn’t use at least 10,000 GPUs, you didn’t try anything.” All the models that show even a spark of intelligence had absurd amounts of compute put into their training. It is possible that Galactica would have worked had Facebook put more resources into it.
I’m old enough to remember this same line of argument about internet company hype: that everyone wanted a company as successful as Microsoft or Yahoo and was throwing money at anything that had a .com. Of course, one of those was Amazon.com…
It’s possible for one field to be 100% a scam (blockchain, NFTs), while another field is 99% a scam (AI startups), and yet the 1% ends up creating a massive new sector of the economy that is richer than anything prior.
Just to be clear, you can build your own telescope now and see the incoming spacecraft.
Right now you can task GPT-4 with a problem at about the level of undergrad physics, let it use plugins, and it will generally get it done. It’s real.
Maybe this is the end of the improvements, just like maybe the aliens will not actually enter orbit around earth.
My experience in research indicates to me that figuring shit out is hard and time consuming, and “intelligence,” whatever that is, has a lot less to do with it than having enough resources and luck. I’m not sure why some super smart digital mind would be able to do science much faster than humans.
That’s right. Eliezer’s LSD vision of the future where a smart enough AI just figures it all out with no new data is false.
However, you could…build a fuckton of robots. Have those robots do experiments for you. You decide on the experiments, probably using a procedural formula. For example, you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this in those domains; this is just extending it.
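As a concrete sketch of what a “procedural formula” could look like, here is a parameter sweep that enumerates wing-design candidates for a hypothetical robot lab to build and test. The parameter names, ranges, and airfoil labels are all made up for illustration; the point is only that the candidate list is generated mechanically rather than thought up one at a time.

```python
# Minimal sketch of procedurally generating experiment candidates for a robot lab.
# The design parameters, ranges, and step counts are all hypothetical.
from itertools import product

spans_m   = [i * 0.1 for i in range(5, 105)]       # wingspan, 0.5 m to 10.4 m
chords_m  = [i * 0.05 for i in range(2, 42)]       # chord length
sweep_deg = range(0, 50, 2)                        # sweep angle
airfoils  = ["naca2412", "naca4415", "naca0012"]   # example airfoil labels

# Cartesian product of every combination -- the "million variations" idea.
candidates = product(spans_m, chords_m, sweep_deg, airfoils)

experiment_queue = []
for span, chord, sweep, airfoil in candidates:
    experiment_queue.append({
        "span_m": span,
        "chord_m": chord,
        "sweep_deg": sweep,
        "airfoil": airfoil,
    })

print(f"{len(experiment_queue):,} wing designs queued for the robots to build and test")
# 100 spans * 40 chords * 25 sweep angles * 3 airfoils = 300,000 candidates
```

In practice you would score and filter the candidates before anything gets built, but the generation step itself is this dumb: nested loops over parameters, handed off to hardware that does the actual experiments.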