OpenAI was working on an advanced model so powerful it alarmed staff: Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman's sacking

26 points

Pure propaganda. The only safety fears anyone in the industry is going to have are about a model telling people to kill themselves or each other. But by saying that, the uneducated public is going to assume it's Skynet.

13 points

Why must it always be propaganda in the Fediverse? Why can't it be a more sensible take, like sensationalization? Not everything is out to get you; sometimes a desperate news site just wants a click or a reader.

1 point

Because the political zeitgeist here is dominated by edgy teenagers who still see the world as something done to them instead of something they are doing. It’s extremely obvious if you’ve been through that phase of life already.

2 points

Sensationalization implies that something really happened and the media turned it into misunderstood clickbait.

If the company designed and executed the PR stunt itself, that would be propaganda.

1 point

There's literally no proof for the latter, and the former is a lot more reasonable. I don't understand this need to jump to conclusions and call everything propaganda like it's a trump card.

2 points

The only safety fears (…) people to kill themselves (…)

Huh? The following might be worse:
The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
https://lemmy.world/post/8715340

1 point

Not related whatsoever to LLMs.

-2 points

So did anyone interview the sister? No? Hm.

6 points

I read that blog post about her. It all sounded pretty wild, perhaps credible and perhaps not. If she really believes what she says and thinks a crime was committed, she needs to hire a lawyer and press charges.

Instead, she's making periodic accusatory posts containing big claims without evidence, which serves more as antagonism or libel than as an effort to achieve justice. It doesn't rise to the standard we expect for finding guilt.

71 points

I’m so burnt out on OpenAI ‘news’. Can we get something substantial at some point?

25 points

AI, Twatter, Tesla - there’s hardly anything else in this community… :(

5 points

It’s just such a relief that you’re doing your daily best to post the content we all so clearly need from this community. I’ve been meaning to thank you for your hard work.

-1 points

Trump 🙄

39 points

The sensationalized headline aside, I wish people would stop being so dismissive about reports of advancement here. Nobody but those at the fringes is freaking out about sentience, and there are plenty of domains where small improvements in the models, if successful, can fuck up big parts of our defense/privacy/infrastructure. It really doesn't matter whether a computer has subjective experience if it is able to decrypt AES-192 or identify keystrokes from an audio recording.

We need to be talking about what happens after AI becomes competent at even a handful of tasks, and it really doesn’t inspire confidence if every bit of news is received with a “LOL computers aren’t conscious GTFO”.

15 points

That's why I hate when people retort, "GPT isn't even that smart, it's just an LLM." Like, yeah, the machines being malevolent is not what I'm worried about; it's the incompetent and malicious humans behind them. Everything from scam mail to propaganda to law enforcement is testing the waters with these "not so smart" models and getting incredible (for them) results. Misinformation is going to be an even bigger problem when it's so hard to know what to believe.

5 points

I'm even more afraid of the competent evil people.

4 points

Also: "Yeah, what are people's minds, really?" The fact that we cannot really categorize our own minds doesn't mean we're forever superior to any categorized AI model. The mere fact that today's bleeding edge is called an LLM doesn't mean it cannot fuck with us, especially an even more powerful one in the future.

123 points

So staff requested the board take action, then those same staff threatened to quit because the board took action?

That doesn’t add up.

53 points

OpenAI loves to "leak" stories about how they've developed an AI so good that it scares their own engineers, because it makes people believe they've made a massive new technological breakthrough.

12 points

Meanwhile, anyone who works in tech immediately thinks "some C-suite dickhead just greenlit ED-209".

14 points

There's clearly a good amount of fog around this. But something that is clearly true is that at least some OpenAI people have behaved poorly: Altman, the board, some employees, the bulk of the employees, or maybe all of them in some way or another.

What we know about the employees is the petition, which ~90% of them signed. Many were quick to point out the weird peer pressure likely surrounding that petition. Amid all that, some employees raising the alarm about the new AI to the board or other higher-ups is perfectly plausible. Either they were also unhappy with the poorly managed Altman sacking, never signed the petition, or signed it while not really wanting Altman back that much.

110 points

The whole thing sounds like some cockamamie plot derived from ChatGPT itself. Corporate America is completely detached from the real world.

30 points

That's exactly what it is: a ploy for free attention, and it's working.

-4 points

Sounds more like a ploy to become fully for-profit and get rid of the non-profit board.

10 points

There’s no way this was a “ploy”.

7 points

That’s an appealing ‘conspiracy’ angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn’t hold up to any real scrutiny whatsoever.

Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?

None of that makes any sense whatsoever from a strategic, corporate “planned” perspective. They are all actions of people who are reacting to things in the heat of the moment and are panicking because they don’t know how it will end.

1 point

Why would they want attention? They're not a publicly traded company.

28 points

More like:

  • They get a breakthrough called Q* (Q star), which is just a combination of two things we already knew about (see the sketch after this comment).

  • Chief scientist dude tells the board Sam has plans for it already

  • Board says Sam is going too fast with his “breakthroughs” and fires him.

  • The original scientist who raised the flag realized his mistake and started supporting Sam, but the damage was done.

  • Microsoft

My bet is the board freaked out at how "powerful" they heard it was (which is still unfounded; from what various articles explain, Q* is not very groundbreaking) and jumped the gun. So now everyone wants them to resign, because they've shown they'll take drastic action, without asking questions, on things they don't understand.
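For anyone wondering about the name: in reinforcement learning, Q is the action-value function and Q* is its optimum, while A* is a classic search algorithm, which is why "combining two things we already knew about" is a common guess. Nothing public confirms what OpenAI's Q* actually is, so the snippet below is only a minimal, hypothetical sketch of plain tabular Q-learning on a made-up toy corridor environment; every name and number in it is an assumption for illustration, not anything from OpenAI.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning on a toy corridor: start at state 0 and
# reach state N_STATES - 1 for a reward of +1. All values are illustrative.
N_STATES = 6
ACTIONS = (-1, +1)                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)             # Q[(state, action)] -> estimated value

def step(state, action):
    """Toy environment: deterministic move, reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):               # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update toward the Bellman-optimal target.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned greedy policy should step right (+1) from every non-terminal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

The update rule itself has been around since the late 1980s; whatever is genuinely new in OpenAI's version, if anything, is not captured by a toy like this.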