Money wins, every time. They’re not concerned with accidentally destroying humanity with an out-of-control, dangerous AI that has decided “humans are the problem.” (I mean, that’s a little sci-fi anyway; an AGI couldn’t “infect” the entire internet as it currently exists.)

However, it’s very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let’s review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and storage even conceivably leave the confines of its own computing environment? It’s not like it can “hop” onto a consumer computer with a fraction of the CPU power and somehow still compute at the same level. AI doesn’t have a “body,” and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others afraid of pursuing AGI themselves. He always knew his end-goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it is clear he couldn’t give a living flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he “wants to save the world, but only if he’s the one who can save it.” I mean, he’s not wrong, but he’s also projecting a lot here. He’s exactly the fucking same: he claimed only he and his non-profit could “safeguard” AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He’s a fucking shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman’s younger, lesser-known sister, has long maintained that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You’d think a company like Microsoft would already know this or vet this. They do know, they don’t care, and they’ll only give a shit if the news ends up making a stink about it. That’s how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn’t the kind of safeguarding they were ever talking about with AGI, so please stop conflating “safeguarding AGI” with “preventing abusive racist assholes from abusing our service.” They aren’t safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They’re safeguarding their service from loser ass chucklefucks like you.

84 points

40+ years on this planet have made me 100% certain that no one with the power to safeguard AGI will make any legitimate effort to do so. Just like we have companies spending millions greenwashing while they pollute more than ever, we’ll have plenty of lip-service about it but never anything useful.

18 points

Anyone who thinks America or your local government is going to regulate AI is delusional, especially in the face of companies planning to build AI data centers on ships and float them into international waters where the law does not apply to them. If not there, they will put them in space. Unregulated AI is coming whether you like it or not, unless we destroy the entire planet first, which I would not rule out. I’m sure this commenter would agree on that.

5 points

An AI data center acting as a rogue state will just be sunk the moment it actually becomes a legitimate problem.

6 points

And billionaires will pay their fair share of taxes.

2 points

Depends on how much money and power they’re entangled with, and who they threaten.

3 points

I don’t disagree that the people with money who are funding this kind of development care nothing about regulations or safety.

That said, the idea that they’ll do it out on the open sea or in space is absolutely laughable. The ideas pitched so far completely ignore all the obvious engineering problems. Not to mention that going to international waters to avoid regulations means the navy of the country you’re thumbing your nose at now has free rein over you.

2 points
Deleted by creator
1 point

Who is?

4 points

My biggest concern with generative AI is all of the CEOs who will eagerly seize the opportunity (and some already have) to fire staff and offload their work onto the remaining employees, who are expected to use ChatGPT to make up for the lost productivity. It’s an easy way for them to further line their pockets without increasing pay for anyone else, widening the worker/CEO wage disparity and the class divide.

1 point

All this noise just to serve up ads

35 points
Removed by mod
5 points

How do you know?

18 points
Removed by mod
4 points

“Because I’m smarter than you” isn’t really an answer.

2 points

Are you able to articulate at least one specific reason that we are nowhere close to developing AGI?

Without any specific reason being stated, I’m tempted to believe you are just confidently declaring this to protect yourself from fear.

3 points

I agree we’re far out, but not as far as you think. Advancements are insane, and AGI could be here in 5-10 years. The way the industry has been attempting it for the past decade is wrong, though; training should go deeper than images and videos. I think a few groups are starting to understand how to do more in-depth training, so even more progress will start soon.

12 points

I think 5-10 years is optimistic given how much hand-tuning and manual training has to take place. Given how insanely long it’s taken to get where we are, how many times I’ve heard machine intelligence oversold, and what LLMs can actually do, I think we are still many decades out.

That said, what ML and AI can do is still game changing and will still have an impact even if it isn’t some kind of scary skynet AGI thing.

10 points

We’ve been promised self driving cars for over 10 years and still aren’t close, I think we’re a long ways away from AGI.

5 points

I would even argue the only way to get self-driving cars that actually work well is with AGI. I don’t think we’re going to get either in a very long time.

2 points

To be fair, that promise came from someone who is clearly a conman and a swindler. If you ever took that promise seriously… I’m sorry.

8 points

I think you are being optimistic.

If you are old enough to remember AIM chatbots, this current generation is maybe multiple times more advanced, not exponentially so. From what I have seen, all the incredible advancements have been in image production.

This leads me to believe that AGI has never been the true commercial goal, but rather an advancement of propaganda media and its creation.

1 point

This leads me to believe that AGI has never been the true commercial goal, but rather an advancement of propaganda media and its creation.

Uh, what? Isn’t it simply that text/image generation isn’t even on the same plane of difficulty as AGI?

2 points

Imagine if we had FTL, that would be so cool.

2 points

If we actually had AGI I suppose it’s possible we would have FTL.

34 points
Deleted by creator
0 points

Let’s see what nuggets I can find in the post hist- didn’t even need to scroll past the first page.

31 points

Somewhere between

A bunch of incapable, spoilt, completely insane men-children with too much money think they can save the world.

and

A bunch of scam artists build an artificial human who they claim can talk and draw and reason just like a real human would.

For the CEOs of this brave new AI world this probably changes depending on their level of hangover and/or midlife crisis.

28 points

It’s common business practice for the first big companies in a new market/industry to create “barriers to entry”. The calls for regulation are exactly that. They don’t care about safety–just money.

6 points

The greed never ends. You’d think companies as big as Microsoft would just be like “maybe we don’t actually need to own everything” but nah. Their sheer size and wealth is enough of a “barrier to entry” as it is.

2 points

Google: “Don’t be evil.”

