OpenAI was working on an advanced model so powerful it alarmed staff :: Reports say the new model, Q*, fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
My name’s GPT. Chat GPT. M, what are my orders? OK, I’ll go see Q.
The sensationalized headline aside, I wish people would stop being so dismissive about reports of advancement here. Nobody but those at the fringes is freaking out about sentience, and there are plenty of domains where small improvements in the models, if successful, could fuck up big parts of our defense/privacy/infrastructure. It really doesn’t matter whether a computer has subjective experience if it’s able to decrypt AES-192 or identify keystrokes from an audio recording.
We need to be talking about what happens after AI becomes competent at even a handful of tasks, and it really doesn’t inspire confidence if every bit of news is received with a “LOL computers aren’t conscious GTFO”.
That’s why I hate it when people retort “GPT isn’t even that smart, it’s just an LLM.” Like yeah, the machines being malevolent is not what I’m worried about; it’s the incompetent and malicious humans behind them. Everything from scam mail to propaganda to law enforcement is testing the water with these “not so smart” models and getting incredible (for them) results. Misinformation is going to be an even bigger problem when it’s this hard to know what to believe.
There’s also the “Yeah, what even are people’s minds, really?” angle. The fact that we can’t really categorize our own minds doesn’t mean we’re forever superior to any categorized AI model. And the mere fact that the current bleeding edge is called an LLM doesn’t mean it can’t fuck with us - especially a more powerful one in the future.
There’s a huge discrepancy between the scary warnings about Q* calling it the lead-up to artificial superintelligence, and the actual discussion of the capabilities of Q* (it is good enough at logic to solve some math problems).
My theory: the actual capabilities of Q* are perfectly nice and useful and unfrightening… but somebody pointed out the obvious: Q* can write code.
Either:

- “Q* is gonna take my job!”
- “As we enhance Q*, it’s going to get better at writing code… and we’ll use Q* to write our AI code. This thing might not be our hypothetical digital God, but it might make it.”
Nah. Programming is… really hard to automate, and machine learning even more so. The actual programming is pretty straightforward, but to make anything useful you need to gather training data, clean it, and design a model architecture, which is much too open-ended a task for an LLM.
Programming is like 10% writing code and 90% managing client expectations, in my limited experience.
But a lot of the crap you have to do only exists because projects are large enough to require multiple separate teams, so you get all the overhead of communication between the teams, etc.
If the task gets simple enough that a single person can manage it, a lot of the coordination overhead will disappear too.
In the end though, people may find out that the entire product, that they are trying to develop using automation, is no longer relevant anyway.
Programming is 10% writing code, 80% being up at 3 in the morning wondering whY THE FUCKING CODE WON’T RUN CORRECTLY (it was a typo that you missed despite looking at it over 10 times), and 10% managing expectations
It’s possible it’s related to the Q* function from Q-learning, a strategy used in deep reinforcement learning!
… or this is the origin of the Q and we’re all fucked. I find my hypothesis much more plausible.
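For anyone unfamiliar with the Q-learning reference: Q*(s, a) denotes the optimal action-value function that the Q-learning update converges toward. A minimal tabular sketch (toy chain environment and all names invented for illustration; this has nothing to do with whatever OpenAI’s Q* actually is):

```python
import random

random.seed(0)

# Hypothetical 5-state chain: move left/right, reward 1 at the last state.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """One transition: reward 1 only for reaching the terminal state."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0), s2 == N_STATES - 1

for _ in range(500):                   # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action choice
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Bellman update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy per non-terminal state; "right" should win near the goal.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

The update rule is the whole trick: no model of the environment is needed, just repeated samples of (state, action, reward, next state). Deep RL replaces the Q table with a neural network, but the target it chases is the same Q*.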
Pure propaganda. The only safety fears anyone in the industry is going to have are a model telling people to kill themselves or each other. But by saying that, the uneducated public is going to assume it’s Skynet.
The only safety fears (…) people to kill themselves (…)
Huh? The following might be worse:
The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
https://lemmy.world/post/8715340
Why must it always be propaganda in the Fediverse? Why can’t it be a more sensible take like sensationalization? Not everything is out to get you, sometimes a desperate news site just wants a click or a reader.
Sensationalization implies that it happened and the media turned it into misunderstood clickbait.
If the company designed the PR stunt and executed the PR stunt that would be propaganda.
There’s literally no proof of the latter, and the former is a lot more reasonable. I don’t understand this need to jump to conclusions and call everything propaganda like it’s a trump card.
I’m so burnt out on OpenAI ‘news’. Can we get something substantial at some point?