12 points

No link or anything, very believable.

8 points

Complain to who? Some random Twitter account? Why would I do that?

12 points

No, here. You could have asked for a link or Googled it.

7 points

Honestly this is a pretty good use case for LLMs and I’ve seen them used very successfully to detect infection in samples for various neglected tropical diseases. This literally is what AI should be used for.

4 points

Sure, agreed. Too bad 99% of its use is still stealing from society to make a few billionaires richer.

5 points

I also agree.

However, these medical LLMs have been around for a long time, and they don’t use horrific amounts of energy, nor do they make billionaires richer. They are the sort of thing a hobbyist can put together, provided they have enough training data. Further to that, they can run offline, allowing doctors to perform tests in the field, as I can attest from witnessing first-hand with soil-transmitted helminth surveys in Mozambique. That means that instead of checking thousands of stool samples manually, those same people can be paid to collect more samples or distribute the drugs that cure the disease in affected populations.

0 points

You don’t understand how they work, and that’s fine; you’re upset based on the paranoid guesswork that’s filled in that lack of understanding, and that’s sad.

No one is stealing from society; ‘society’ isn’t being deprived of anything when AI looks at an image. The research is pretty open, and humanity is benefiting from it in the same way Tesla, Westinghouse and Edison benefited the history of electrical research.

And yes, if you’re about to tell me Edison did nothing but steal, then this is another bit of tech history you’ve not paid attention to beyond memes.

The big companies you hate, like Meta or Nvidia, are producing papers that explain their methods; you can follow along at home and make your own model - though with those examples you don’t need to, because they’ve released models under open licenses. Ironically, it seems likely you don’t understand how any of this works or what’s happening, because Zuck is doing significantly more to help society than you are - ironic, huh?

And before you tell me about Zuck doing genocide or other childish arguments: we’re on Lemmy, which was purposefully designed to remove power from a top-down authority, so if an instance pushed for genocide we would have zero power to stop it - the report you’re no doubt going to allude to says that Facebook is culpable because it did not have adequate systems in place to control locally run groups…

I could make good arguments against Zuck - I don’t think anyone should be able to be that rich - but it’s funny to me when a group freely shares PyTorch and other key tools used to help do things like detect cancer cheaply and efficiently, help impoverished communities access education and health resources in their local language, help blind people have independence, etc., etc., all the many positive uses for AI - and you shit on it all simply because you’re too lazy and selfish to actually do anything materially constructive to help anyone or anything that doesn’t directly benefit you.

1 point

These models aren’t LLM-based.

10 points

Can’t pigeons do the same thing?

209 points
Deleted by creator
127 points

Using AI for anomaly detection is nothing new though. Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.
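
For example, a lot of “AI spots anomalies” work is closer to classical statistics than to chatbots. A toy sketch of what that can look like (the data is made up, and scikit-learn’s IsolationForest is just one arbitrary choice of technique):

```python
# Toy anomaly-detection sketch using scikit-learn's IsolationForest.
# The "measurements" are invented; real pipelines use domain-specific
# features and proper validation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend each row is a handful of measurements from one sample.
normal_samples = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
odd_samples = rng.normal(loc=4.0, scale=1.0, size=(10, 4))   # rare outliers
data = np.vstack([normal_samples, odd_samples])

detector = IsolationForest(contamination=0.02, random_state=0).fit(data)

# predict() returns -1 for samples the model flags as anomalous, 1 otherwise.
flags = detector.predict(data)
print(f"flagged {np.sum(flags == -1)} of {len(data)} samples as anomalous")
```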

77 points

That’s why I hate the term AI. Say it is a predictive LLM or a pattern recognition model.

23 points

The correct term is “Computational Statistics”

-3 points

It’s a good term; it refers to lots of things. There are many terms like that.

65 points

Say it is a predictive LLM

According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
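
For anyone curious what that looks like in code, here’s a rough sketch of the general recipe - not the authors’ code, and the input size, loss, and training details are my own placeholders:

```python
# Minimal sketch of a ResNet18-based image classifier in PyTorch.
# Not the paper's actual code; sizes and training details are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Start from a standard ResNet18 (weights=None needs torchvision >= 0.13)
# and swap the 1000-class head for a single "cancer within N years" logit.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step on a random batch; real mammograms are grayscale
# and would be replicated to 3 channels, since ResNet expects 3-channel input.
images = torch.randn(4, 3, 512, 512)
labels = torch.randint(0, 2, (4, 1)).float()

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy loss: {loss.item():.4f}")
```

Notice there is no tokenizer, no text, no transformer - just a convolutional network predicting a probability from pixels.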

or a pattern recognition model.

Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is); otherwise we wouldn’t be here talking about it.

11 points

Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.

From the conclusion of the actual paper:

Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.

7 points

For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factor regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step). Edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step.
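
For context, one common way to build that sort of hybrid (not necessarily what the authors did - the layer sizes and number of risk factors below are invented) is late fusion: concatenate the CNN’s image features with the tabular risk factors and train a small head on top of both:

```python
# Rough sketch of a late-fusion image + risk-factor model in PyTorch.
# Generic design for illustration, not the paper's exact architecture.
import torch
import torch.nn as nn
from torchvision import models

class HybridRiskModel(nn.Module):
    def __init__(self, num_risk_factors: int = 8):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feature_dim = backbone.fc.in_features    # 512 for ResNet18
        backbone.fc = nn.Identity()              # keep image features only
        self.backbone = backbone
        # Small head mixing image features with tabular risk factors.
        self.head = nn.Sequential(
            nn.Linear(feature_dim + num_risk_factors, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                    # single risk logit
        )

    def forward(self, image, risk_factors):
        image_features = self.backbone(image)
        combined = torch.cat([image_features, risk_factors], dim=1)
        return self.head(combined)

model = HybridRiskModel()
fake_images = torch.randn(2, 3, 224, 224)
fake_factors = torch.randn(2, 8)                 # e.g. age, family history, ...
print(model(fake_images, fake_factors).shape)    # torch.Size([2, 1])
```

Trained end to end, both branches share one loss, which is one way the risk-factor side can “come into play” during the training step.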

As a different commenter mentioned, the data collection is largely the interesting part here.

I’ll admit I was wrong about my first guess as to the network topology used, though; I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).

3 points

I skimmed the paper. As you said, they made an ML model that takes images and traditional risk factors (TCv8).

I would love to see comparison against risk factors + human image evaluation.

Nevertheless, this is the AI that will really help humanity.

18 points

It’s really difficult to clean those data. In another case, the markings were kept on the training data, and the result was that the images from patients who had cancer had a doctor’s signature on them, so the AI could always tell the cancer images from the non-cancer images by the presence or absence of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.
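
That signature story is a textbook case of shortcut learning, and it’s easy to reproduce with toy data. A quick synthetic sketch (everything below is made up): positives carry a bright corner mark standing in for the signature, a simple classifier scores perfectly on it, and then falls apart once the corner is masked out - which is one cheap sanity check:

```python
# Toy demonstration of shortcut learning: the label is only encoded in a
# corner marker (standing in for a doctor's signature), not in the "tissue".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_images(n, with_marker):
    images = rng.normal(size=(n, 32, 32))       # pure-noise "tissue"
    if with_marker:
        images[:, :4, :4] += 5.0                # bright corner "signature"
    return images

# Training set: every positive image has the marker, negatives do not.
pos, neg = make_images(200, True), make_images(200, False)
X_train = np.vstack([pos, neg]).reshape(400, -1)
y_train = np.array([1] * 200 + [0] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # ~1.0, looks great

# Sanity check: mask the corner on fresh positives; accuracy collapses,
# revealing the model learned the marker, not anything about the "tissue".
masked_pos = make_images(200, True)
masked_pos[:, :4, :4] = 0.0
X_masked = masked_pos.reshape(200, -1)
print("masked positives detected:", clf.score(X_masked, np.ones(200, dtype=int)))
```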

2 points

That’s the thing about machine learning: it sees nothing but whatever correlates. That’s why data science is such a complex topic; you don’t spot errors like this easily. Testing a model is still very underrated, and usually there is no time to test a model properly.

14 points

Citation please?

118 points

Why do I still have to work my boring job while AI gets to create art and look at boobs?

52 points

Because life is suffering and machines dream of electric sheep.

18 points

I’ve seen things you people wouldn’t believe.

2 points

I dream of boobs.

7 points

Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

https://news.mit.edu/2024/ai-model-identifies-certain-breast-tumor-stages-0722

4 points

How soon could this diagnostic tool be rolled out? It sounds very promising given the seriousness of DCIS!

-1 points

As soon as your hospital system is willing to pay big money for it.

1 point

Anything medical is slow, but tools like this tend to get used by doctors, not patients, so it’s much easier.

