In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.
But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business customers will buy our products so their products will cost more to make, but will be of higher quality.”
AI tools like this should really be viewed like calculators: helpful for speeding up analysis, but you still need an expert to sign off.
Honestly anything they are used for should be validated by someone with a brain.
You don’t need AI for this, and it’s probably not using “AI” anyway.
Also, in other countries there is no bullshit separation between nurses and “techs”.
What they’re describing is the kind of thing where the “last-gen” iteration of AI, i.e. pretrained neural networks, is very applicable: taking various vitals as inputs and outputting a value for whether it should alarm. For simple things you don’t need any of that, but if you want to detect more subtle signs and give an early warning, it can be really difficult to write that logic by hand, while machine learning can potentially catch cases you didn’t even think of.
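To make that concrete, here’s a toy sketch of the idea (entirely synthetic data and made-up thresholds, not a clinical model): a small network is trained on vitals labelled “alarm / no alarm”, and on a new reading it just outputs a probability for a human to act on.

```python
# Toy sketch only: synthetic vitals and an invented labelling rule, not a clinical model.
# Illustrates a small network mapping vitals -> alarm probability.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def synth_patients(n):
    # Features: heart rate, SpO2, respiratory rate, systolic BP (all invented here)
    hr   = rng.normal(80, 15, n)
    spo2 = rng.normal(96, 3, n)
    rr   = rng.normal(16, 4, n)
    sbp  = rng.normal(120, 20, n)
    X = np.column_stack([hr, spo2, rr, sbp])
    # Crude "ground truth" for the toy: alarm when vitals drift in worrying combinations
    y = ((hr > 110) & (spo2 < 93)) | (rr > 26) | (sbp < 85)
    return X, y.astype(int)

X_train, y_train = synth_patients(5000)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# New reading: elevated HR, low-ish SpO2 -- the net outputs a probability,
# and a human decides what to do with it.
reading = np.array([[118, 91, 22, 100]])
print("alarm probability:", model.predict_proba(reading)[0, 1])
```

In the toy the labels come from a hand-written rule, but in practice they’d come from recorded outcomes, which is where the “catches cases you didn’t think of” part comes in.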
Ideally, yeah: people would review and decide first, then check whether the AI’s opinion concurs.
We all know that’s just not how things go in a professional setting.
Anyone, including me, is just going to skip to the end and see what the AI says, and consider whether it’s reasonable. Then spend the allotted time goofing off.
Obviously this is not how things ought to be, but it’s how things have been every time some new tech improves productivity.
I mean… duh? The purpose of an LLM is to map words to meanings… to derive what a human intends from what they say. That’s it. That’s all.
It’s not a logic tool or a fact regurgitator. It’s a context interpretation engine.
The real flaw is that people expect that because it can sometimes (more than past attempts) understand what you mean, it is capable of reasoning.
Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.
Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven’t told them that and they have no idea what a lamp post is. They will just produce results like the shapes you’ve shown them, which generally end up looking like lamp posts.
Except the “shape” in this case is a sentence or poem or self-insert erotic fan fiction, none of which an LLM “understands”; it just matches the shape of what’s been written so far with previous patterns and extrapolates.
Well yes… I think that’s essentially what I’m saying.
It’s debatable whether our own brains really operate any differently. For instance, if I say the word “lamppost”, your brain determines the meaning of that word based on the context of my other words around “lamppost” and also all of your past experiences that are connected with that word - because we use past context to interpret present experience.
In an abstract, nontechnical way, training a machine learning model on a corpus of data is sort of like trying to give it enough context to interpret new inputs in an expected/useful way. In the case of LLMs, it’s an attempt to link the use of words and phrases with contextual meanings so that a computer system can interact with natural human language (rather than specifically prepared and formatted language like programming).
It’s all just statistics though. The interpretation is based on ingestion of lots of contextual uses. It can’t really understand… it has nothing to understand with. All it can do is associate newly input words with generalized contextual meanings based on probabilities.
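A stripped-down way to see the “it’s all statistics” point: the toy below builds a bigram table from a few sentences and then “completes the shape” by sampling whichever word most often followed the previous one. Real LLMs use subword tokens and attention over long contexts rather than a lookup table, but the output is still just a probability distribution over what comes next.

```python
# Toy bigram model: no "understanding," just counting which word tends to follow which,
# then sampling from those counts to extend a prompt.
import random
from collections import Counter, defaultdict

corpus = (
    "the lamp post stood by the road . "
    "the lamp flickered in the dark . "
    "the road was dark and quiet ."
).split()

# Count how often each word follows each other word
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# "Complete the shape": start a sentence and extrapolate from seen patterns.
word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Run it a few times and you get plausible-looking lamp-post sentences with zero understanding anywhere in the loop.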
Plenty of people can’t reason either. The current state of AI is closer to us than we’d like to admit.
They just want to make an economy they don’t have to pay anyone to profit from. That’s why slavery became Jim Crow became migrant labor and with modernity came work visa servitude to exploit high skilled laborers.
The owners will make sure they always have concierge service with human beings as part of upgraded service, like they do now with concierge medicine. They don’t personally suffer approvals for care. They profit from denying their livestock’s care.
Meanwhile we, their capital battery livestock property, will be yelling at robots about refilling our prescription as they hallucinate and start singing happy birthday to us.
We could fight back, but that would require fighting the right war against the right people and not letting them distract us with subordinate culture battles against one another. Those are booby traps laid between us and them by them.
Only one man, a traitor to his own class no less, has dealt them so much as a glancing blow, while we battle one another over one of the dozens of social wedges the owners stoke through their for-profit megaphones. “Women hate men! Christians hate atheists! Poor hate more poor! Terfs hate trans! Color hate color! 2nd Gen immigrants hate 1st Gen immigrants!” On and on and on and on as we ALL suffer less housing, less food, less basic needs being met. Stop it. Common enemy: the meaningful shareholders.
And if you think your little 401k makes you a meaningful shareholder, please just go sit down and have a juice box, the situation is beyond you and you either can’t or refuse to understand it.
You know, OpenAI published a paper in 2020 modelling how language-model loss scales, and it correctly predicted roughly where GPT-4 would land. DeepMind also published a study in 2022 with the same kind of scaling fit and found that even with unlimited training data and compute, the loss never drops below an irreducible floor of about 1.69.
These companies knew that their basic approach had a ceiling and that overfitting trashed their models.
Sam Altman and all these other fuckers knew, they’ve always known, that their LLMs would never function perfectly. They’re convincing all the idiots on earth that they’re selling an AGI prototype while they already know that it’s a dead-end.
As far as I know, the Deepmind paper was actually a challenge of the OpenAI paper, suggesting that models are undertrained and underperform while using too much compute due to this. They tested a model with 70B params and were able to outperform much larger models while using less compute by introducing more training. I don’t think there can be any general conclusion about some hard ceiling for LLM performance drawn from this.
However, this does not change the fact that there are areas (ones that rely on correctness) where this kind of model simply cannot take over, and chasing that is a foolish pursuit.
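For anyone wondering where the 1.69 figure comes from: the DeepMind (Chinchilla) paper fits training loss with a parametric form that includes an irreducible term E. The constants below are quoted from memory, so treat them as approximate.

```latex
% Parametric loss fit from Hoffmann et al., "Training Compute-Optimal Large
% Language Models" (2022); fitted constants quoted from memory, approximately
% E ~ 1.69, A ~ 406.4, B ~ 410.7, alpha ~ 0.34, beta ~ 0.28.
\[
  \hat{L}(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
  \qquad
  \lim_{N,\,D \to \infty} \hat{L}(N, D) = E \approx 1.69
\]
```

Taking the limit just says the fitted cross-entropy can’t go below about 1.69 nats per token under that parametric form; it isn’t a statement about task accuracy, which is part of why I don’t think a hard ceiling on usefulness follows from it.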
Human hardware is pretty impressive; we might need to move on from binary computers to emulate it efficiently.
Does it rat out CEO hunters though?
That’s probably its primary function. That, and maximizing profits through charging flex pricing based on who’s the biggest sucker.