BrickedKeyboard@awful.systems

Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.

…What if telescopes showed a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You could pay $20 a month to rent a telescope and see the flare yourself.

The cult, of course, points to its “sequences” of writings by the Great Leader, and some of it does line up with the imminent arrival of this interstellar vehicle.

My point is that lesswrong knew about GPT-3 years before the mainstream found it, many OpenAI employees post there, etc. If the imminent arrival of AI were fake - like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs - that would be one thing. But pay $20 a month and, man, this tool seems smart. What could it do if it could learn from its mistakes and had the vision module deployed…

Oh, and I guess the other plot twist in this analogy: the Great Leader is now saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants, and that’s actually not a totally unreasonable conclusion if you could see an alien spacecraft approaching Earth.

And he’s telling everyone to do stupid stuff like nuke each other so the aliens will go away, among other unhinged rants, and his followers are eating it up.


It would be lesswrongness.

Just to pin down where the gap is:

  1. lesswrongers think powerful AGI systems that can act on their own against humans will soon exist, and will be able to escape to the internet.
  2. I work in AI and think powerful general AI systems (not necessarily the same thing as AGI) will exist soon, but that if built well they will be unable to act against humans without orders, and unable to escape or do many of the other things lesswrongers claim.
  3. You believe AGI of any flavor is a very long way away, beyond your remaining lifespan?

Hi David. The reason I dropped by is that the whole concept of knowing the distant future with so much certainty seemed like a deep flaw, and I’ve noticed lesswrong itself is full of nothing but ‘cultist’ AI doomers. Everyone parrots a narrow range of conclusions, mainly that imminent AGI will kill everyone, and this, ironically, doesn’t seem very rational…

I actually work on the architecture for current production AI systems and whenever I mention approaches that do work fine and suggest we could control more powerful AI this way, I get downvoted. So I was trying to differentiate between:

A. This is a club of smart people, even smarter than the lesswrongers, who can’t see the flaws!

B. This is a club of, well, boomers. The reason I called it that is that the current news and AI papers make each of the questions I asked a reasonable, conservative outcome. For example, posters here answered (1) with “no, it won’t do 25% of the jobs”. That wasn’t the question; it was 25% of the tasks. Copilot already writes about 25% of my code, and GPT-4 helps me with emails to my boss, so from my perspective this is reasonable. The rest of the questions build on (1).
