OpenAI was working on an advanced model so powerful it alarmed staff::Reports say the new model, Q*, fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
This is the best summary I could come up with:
OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.
The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.
The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers.
The reports followed days of turmoil at San Francisco-based OpenAI, whose board sacked Altman last Friday but then reinstated him on Tuesday night after nearly all the company’s 750 staff threatened to resign if he was not brought back.
As part of the agreement in principle for Altman’s return, OpenAI will have a new board chaired by Bret Taylor, a former co-chief executive of software company Salesforce.
However, his brief successor as interim chief executive, Emmett Shear, wrote this week that the board “did not remove Sam over any specific disagreement on safety”.
So staff requested the board take action, then those same staff threatened to quit because the board took action?
That doesn’t add up.
The whole thing sounds like some cockamamie plot derived from ChatGPT itself. Corporate America is completely detached from the real world.
That’s an appealing ‘conspiracy’ angle, and I understand why it might seem juicy and tantalising to onlookers, but that idea doesn’t hold up to any real scrutiny whatsoever.
Why would the Board willingly trash their reputation? Why would they drag the former Twitch CEO through the mud and make him look weak and powerless? Why would they not warn Microsoft and risk damaging that relationship? Why would they let MS strike a tentative agreement with the OpenAI employees that upsets their own staff, only to then undo it?
None of that makes any sense whatsoever from a strategic, corporate “planned” perspective. They are all actions of people who are reacting to things in the heat of the moment and are panicking because they don’t know how it will end.
There’s clearly a good amount of fog around this. But one thing that is clearly true is that at least some people at OpenAI have behaved poorly: Altman, the board, some employees, the majority of employees, or maybe all of them in some way or another.
What we know about the employees is the petition, which ~90% of them signed. Many were quick to point out the weird peer pressure that likely surrounded that petition. Amid all that, some employees raising alarm about the new AI to the board or other higher-ups is perfectly plausible. Those employees may also have been unhappy with the poorly managed Altman sacking, may never have signed the petition, or may have signed it while not really wanting Altman back all that much.
OpenAI loves to “leak” stories about how they’ve developed an AI so good that it is scaring engineers because it makes people believe they’ve made a massive new technological breakthrough.
More like:
- They get a breakthrough called Q* (Q-Star), which is just a combination of two things we already knew about.
- The chief scientist dude tells the board that Sam already has plans for it.
- The board says Sam is going too fast with his “breakthroughs” and fires him.
- The scientist who originally raised the flag realizes his mistake and starts supporting Sam, but the damage is done.
- Microsoft
My bet is the board freaked out at how “powerful” they heard it was (which is still unsubstantiated; from what various articles explain, Q* is not very groundbreaking) and jumped the gun. So now everyone wants them to resign, because they’ve shown they’ll take drastic action, without asking questions, on things they don’t understand.
Allegedly. And no proof was presented. The letter cited was nowhere to be found.
So did anyone interview the sister? No? Hm.
I read that blog post about her. It all sounded pretty wild, perhaps credible and perhaps not. If she really believes what she says, she should hire a lawyer and press charges if she thinks a crime was committed.
Instead she’s making these periodic accusatory posts containing big claims without evidence, which serves more as an antagonizing, potentially libelous effort than an attempt to achieve justice. It doesn’t rise to the standard we expect for finding guilt.
I’m so burnt out on OpenAI ‘news’. Can we get something substantial at some point?