BrickedKeyboard
BrickedKeyboard@awful.systems
1 posts • 43 comments

Did this happen with Amazon? The VC money is a catalyst. It’s advancing money for a share of future revenues. If AI companies can establish a genuine business that collects revenue from customers, they can reinvest some of that money into improving the model, and so on.

OpenAI specifically seems to have needed only about 5 months to reach a $1 billion annual revenue run rate; by the revenue multiples typically applied to tech companies, that already implies an intrinsic value of more than $10 billion.

If they can’t, if the AI models remain too stupid to be worth paying for, then obviously there will be another AI winter.

https://fortune.com/2023/08/30/chatgpt-creator-openai-earnings-80-million-a-month-1-billion-annual-revenue-540-million-loss-sam-altman/
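The run-rate arithmetic above can be sketched as follows. The $80M/month figure is from the linked Fortune headline; the 10x revenue multiple is an assumption for illustration, not a claim about OpenAI’s actual valuation:

```python
# Sketch of the run-rate arithmetic (assumed numbers, for illustration only)
monthly_revenue = 80e6                     # USD/month, per the Fortune headline
annual_run_rate = monthly_revenue * 12     # annualized revenue run rate
print(f"annual run rate: ${annual_run_rate / 1e9:.2f}B")   # ~$0.96B

revenue_multiple = 10                      # a common rough tech-sector multiple (assumption)
implied_value = annual_run_rate * revenue_multiple
print(f"implied valuation: ${implied_value / 1e9:.1f}B")   # ~$9.6B
```

So “more than $10 billion intrinsic value” holds roughly, give or take the multiple you assume.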

I agree completely. This is exactly where I break with Eliezer’s model. Yes, obviously an AI system that can self-improve can only do so until either (1) it is the best algorithm that can run on the server farm, or (2) finding a better algorithm takes more compute than it’s worth investing given current compute.

That’s not a god. Run this in an AI experiment now and it might crap out at double the starting performance or less, without even beating the SOTA.

But if robots can build robots, and current AI progress shows a way to do it (foundation models for human tool manipulation), then…

Genuinely asking: I don’t think it’s “religion” to suggest that a huge speedup in global GDP growth would be a dramatic event.

Currently the global economy doubles every ~23 years. Robots building robots and robot-making equipment can probably double faster than that. It won’t be a week or a month; energy requirements alone limit how fast it can happen.

Suppose the doubling time is 5 years, just to put a number on it. The economy would then be growing about 4.6 times faster than it is now (23/5). This continues until the solar system runs out of matter.
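For concreteness, the growth-rate comparison can be sketched like this. The 23-year and 5-year doubling times are the assumed numbers from this comment, not measured data:

```python
import math

def annual_growth_rate(doubling_years):
    """Continuous-compounding rate r such that exp(r * t) doubles in t years."""
    return math.log(2) / doubling_years

baseline = annual_growth_rate(23)   # current world economy: doubles every ~23 years
robotic = annual_growth_rate(5)     # hypothetical robots-building-robots economy

print(f"baseline: {baseline:.1%}/yr, robotic: {robotic:.1%}/yr")  # ~3.0% vs ~13.9%
print(f"speedup: {robotic / baseline:.1f}x")                      # 23/5 = 4.6x
```

Note the ratio of growth rates is just the ratio of doubling times, so the speedup factor follows directly from whatever doubling time you assume.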

Is this a relevant event? Does it qualify as a singularity? Genuinely asking: how have you “priced in” this possibility in your worldview?

I wanted to know what you know that I don’t. If rationalists are all scammers and not genuinely trying to be, per the name “lesswrong”, less wrong in their view of reality, then what’s your model of reality? What do you know? So far, unfortunately, I haven’t seen anything. Sneer club’s “reality model” seems to be “whatever the mainstream average person knows, plus one physicist”, and it exists to make fun of rationalists’ mistakes while, I assume, ignoring any successes, if there are any.

Which is fine, I guess? Mainstream knowledge is probably usually correct. It’s just that I already know it; there’s nothing to be learned here.

This pattern shows up often when people criticize Tesla or SpaceX. And yeah, if you measure “current reality” against “the promises of their hype man/lead shitposter and internet troll”, absolutely. Tesla probably will never achieve full self-driving with anything like its current approach. But if you compare Tesla to other automakers, to most automakers that ever existed, or SpaceX to any rocket company since 1970, there’s no comparison. If you’re going to judge the internet against the pre-internet world, compare it to BBSes you accessed via modem, or to fax machines and libraries. No comparison.

Similarly, you should compare GPT-4, and the next large model to be released, Gemini, against all AI software that came before. There’s no comparison.

> take some time and read this

I read it. I appreciated the point that human perception of current AI performance can scam us, though this is nothing new; people were fooled by ELIZA.

It’s a weak argument, though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means: given a task, what is the probability the machine completes it successfully? Theoretically an infinite Chinese room could have functional intelligence (the machine just looks up the sequence of steps for any given task).

People have benchmarked GPT-4, and it has general functional intelligence at tasks that can be done on a computer. You can also just go pay $20 a month and try it. It’s below human level overall, I think, but still surprisingly strong given it’s emergent behavior from computing tokens.
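The “functional intelligence” definition above is just an empirical task-success rate, which can be sketched in a few lines. The agent and task set here are toy placeholders (assumptions), not a real benchmark:

```python
# Toy sketch of "functional intelligence" as the probability of task completion.
def success_rate(agent, tasks):
    """Fraction of tasks the agent completes successfully."""
    return sum(1 for task in tasks if agent(task)) / len(tasks)

# Placeholder task set: 70 easy tasks, 30 hard ones
tasks = [("easy", i) for i in range(70)] + [("hard", i) for i in range(30)]

# Placeholder "agent" that only solves the easy tasks
toy_agent = lambda task: task[0] == "easy"

print(success_rate(toy_agent, tasks))  # 0.7
```

Real benchmarks differ mainly in how “task” and “success” are defined, but the headline number is this same ratio.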

I appreciated this post because it never occurred to me that the thumb might be on the scales for the “rules for discourse” that seem to be the norm around the rat forums. I personally ignore most of it. However, the “ES” (epistemic status) rat phrase is simply saying: “I know we humans are biased observers; this is where I’m coming from.” If the topic were renewable energy and I were the head of extraction at BP, you could expect that whatever I have to say is probably biased against renewable energy.

My other thought reading this was: what about the truth? Maybe the mainstream is correct about everything. “Sneer club” seems to be mostly mainstream opinions. That’s fine, I guess, but the mainstream is sometimes wrong about issues that have been poorly examined, or about near-future events. The collective opinion of everyone doesn’t really price in things that are about to happen, even when they’re obvious to experts. For example, the mainstream opinion on COVID usually lagged several weeks behind Zvi’s posts on lesswrong.

Where I am going with this: you can point out bad arguments on my part, but in the end, does truth matter? Are we here to score points on each other, or to share what we think reality is, or will very soon be?

To be clear (and maybe you will be unimpressed by this): scale matters. I said in the text above, “10 times current industrial output. Within 17 years RMR, robots making robots.” If you already priced that in, OK, that’s an acceptable position, but the magnitude of a singularity matters, not just that it’s happening.

And just to be clear, for one to be “lost in the AI religion”, the claims have to be false, correct? That is, we will not see the things I mentioned within the timeframes I gave (7 years, 17 years, and implicitly, if there is no immediate progress toward the nearer deadline within 1 year, it’s not going to happen).

Google’s Gemini will not be multimodal and will not be capable of learning tasks via reinforcement learning to human level, right? Robotics foundation models will not work.
