No link or anything, very believable.
You could participate or complain.
https://news.mit.edu/2019/using-ai-predict-breast-cancer-and-personalize-care-0507
Honestly this is a pretty good use case for LLMs and I’ve seen them used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.
Sure, agreed. Too bad 99% of its use is still stealing from society to make a few billionaires richer.
I also agree.
However, these medical models have been around for a long time, and they don’t use horrific amounts of energy, nor do they make billionaires richer. They are the sort of thing a hobbyist can put together provided they have enough training data. Further to that, they can run offline, allowing doctors to perform tests in the field, as I can attest to having witnessed first-hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or distribute the drugs that cure the disease in affected populations.
You don’t understand how they work, and that’s fine, but you’re upset based on the paranoid guesswork that’s filled in your lack of understanding, and that’s sad.
No one is stealing from society; ‘society’ isn’t being deprived of anything when AI looks at an image. The research is pretty open, and humanity is benefitting from it in the same way Tesla, Westinghouse and Edison benefitted the history of electrical research.
And yes, if you’re about to tell me Edison did nothing but steal, then this is another bit of tech history you’ve not paid attention to beyond memes.
The big companies you hate, like Meta or Nvidia, are producing papers that explain their methods; you can follow along at home and make your own model, though with those examples you don’t need to, because they’ve released models under open licenses. Ironically, it seems likely you don’t understand how any of this works or what’s happening, because Zuck is doing significantly more to help society than you are. Ironic, huh?
And before you tell me about Zuck doing genocide or other childish arguments: we’re on Lemmy, which was purposefully designed to remove power from a top-down authority, so if an instance pushed for genocide we would have zero power to stop it. The report you’re no doubt going to allude to says that Facebook is culpable because it did not have adequate systems in place to control locally run groups…
I could make good arguments against Zuck; I don’t think anyone should be able to be that rich. But it’s funny to me when a group freely shares PyTorch and other key tools used to help do things like detect cancer cheaply and efficiently, help impoverished communities access education and health resources in their local language, help blind people have independence, etc., etc., all the many positive uses for AI, and you shit on it all simply because you’re too lazy and selfish to actually do anything materially constructive to help anyone or anything that doesn’t directly benefit you.
Can’t pigeons do the same thing?
Using AI for anomaly detection is nothing new though. Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
That’s why I hate the term AI. Say it is a predictive llm or a pattern recognition model.
It’s a good term, it refers to lots of things. There are many terms like that.
Say it is a predictive llm
According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
or a pattern recognition model.
Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is); had they used a precise term, we probably wouldn’t be here talking about it.
Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
From the conclusion of the actual paper:
Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.
If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.
For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.
The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step). edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step
As a different commenter mentioned, the data collection is largely the interesting part here.
I’ll admit I was wrong about my first guess as to the network topology used, though; I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).
It’s really difficult to clean that data. In another case, the markings were left on the training images, and every image from a cancer patient carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer ones by the presence or absence of a signature. However, researchers are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.
That’s the tricky thing about machine learning: the model sees nothing but whatever correlates with the label. That’s why data science is such a complex topic, as you do not see errors like this easily. Testing a model is still very underrated, and usually there is no time to test a model properly.
Why do I still have to work my boring job while AI gets to create art and look at boobs?
Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.
Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.
https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722
How soon could this diagnostic tool be rolled out? It sounds very promising given the seriousness of DCIS!