210 points
Deleted by creator
127 points

Using AI for anomaly detection is nothing new, though. I haven’t read any article about this specific ‘discovery’, but this usually uses a completely different technique from the AI that comes to mind when people think of AI these days.

77 points

That’s why I hate the term AI. Say it is a predictive LLM or a pattern recognition model.

65 points

Say it is a predictive LLM

According to the paper cited by the article OP posted, there is no LLM in the model. If I read it correctly, the paper says that it uses PyTorch’s implementation of ResNet18, a deep convolutional neural network that isn’t specifically designed to work on text. So this term would be inaccurate.
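For anyone curious, here’s roughly what that looks like in code: a minimal sketch (my own illustration, not the paper’s actual training setup) of torchvision’s stock ResNet-18 with a two-class head, whose input is an image tensor rather than text tokens.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Stock ResNet-18 with its classifier swapped for a two-class head
# ("will develop cancer" vs. "won't") -- a hypothetical illustration,
# not the authors' exact configuration.
model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

dummy_image = torch.randn(1, 3, 224, 224)  # a batch of one image
logits = model(dummy_image)                # shape (1, 2); no tokens anywhere
```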

or a pattern recognition model.

Much better term IMO, especially since it uses a convolutional network. But since the article is a news publication, not a serious academic paper, the author knows the term “AI” gets clicks and positive impressions (which is what their job actually is); if they had used the accurate term, we wouldn’t be here talking about it.

23 points

The correct term is “Computational Statistics”

-3 points

It’s a good term; it refers to lots of things. There are many terms like that.

11 points

I haven’t read any article about this specific ‘discovery’, but this usually uses a completely different technique from the AI that comes to mind when people think of AI these days.

From the conclusion of the actual paper:

Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

If I read this paper correctly, the novelty is the model itself: a deep learning model that works on mammogram images plus traditional risk factors.

7 points

For the image-only DL model, we implemented a deep convolutional neural network (ResNet18 [13]) with PyTorch (version 0.31; pytorch.org). Given a 1664 × 2048 pixel view of a breast, the DL model was trained to predict whether or not that breast would develop breast cancer within 5 years.

The only “innovation” here is feeding full-view mammograms to a ResNet18 (a 2016 model). The traditional risk-factors regression is nothing special (barely machine learning). They don’t go in depth about how they combine the two for the hybrid model, so it’s probably safe to assume it is something simple (merely combining the results, so nothing special in the training step). Edit: I stand corrected; a commenter below pointed out the appendix, and the regression does in fact come into play in the training step.
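If anyone wants a picture of what a simple hybrid could look like, here’s a sketch of my own guess (a ResNet-18 image embedding concatenated with tabular risk factors feeding a small prediction head); the class and layer sizes are made up for illustration, not taken from the appendix.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridRiskModel(nn.Module):
    """Hypothetical fusion of an image backbone with tabular risk factors."""

    def __init__(self, num_risk_factors: int):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep the 512-d image embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + num_risk_factors, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                # logit for "cancer within 5 years"
        )

    def forward(self, image, risk_factors):
        z = self.backbone(image)             # (batch, 512)
        x = torch.cat([z, risk_factors], 1)  # append the tabular features
        return self.head(x)

model = HybridRiskModel(num_risk_factors=10)
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))
```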

As a different commenter mentioned, the data collection is largely the interesting part here.

I’ll admit I was wrong about my first guess as to the network topology used, though; I was thinking they used something like autoencoders (but those are mostly used in cases where examples of bad samples are rare).

3 points

I skimmed the paper. As you said, they made an ML model that takes images and traditional risk factors (TCv8).

I would love to see a comparison against risk factors + human image evaluation.

Nevertheless, this is the AI that will really help humanity.

18 points

It’s really difficult to clean that kind of data. In another case, they kept the markings on the training data, and the result was that the images from patients who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer images just by the lack of a signature. However, these people also get smarter about picking their training data, so it’s not impossible for this to work properly at some point.

14 points

Citation please?

2 points

That’s the thing about machine learning: it sees nothing but whatever happens to correlate. That’s why data science is such a complex topic; you don’t spot errors like this easily. Testing a model is still very underrated, and usually there is no time to test a model properly.

121 points

Why do I still have to work my boring job while AI gets to create art and look at boobs?

53 points

Because life is suffering and machines dream of electric sheep.

18 points

I’ve seen things you people wouldn’t believe.

2 points

I dream of boobs.

102 points

Now make mammograms not cost $500, not have a 6-month waiting time, and be available to women under 40. Then this’ll be a useful breakthrough.

77 points

It’s already this way in most of the world.

42 points

Oh, for sure. I only meant in the US, where MIT is located. But it’s already a useful breakthrough for everyone in civilized countries.

3 points

For reference, here in Australia my wife has been asking to get mammograms for years now (she’s in her 30s), and she keeps getting told she’s too young because she doesn’t have a familial history. That issue is a bit pervasive in countries other than the US too.

10 points

Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.

2 points

Also, flying cars and the quadrature of the circle.

5 points

I think it’s free, or at least relatively cheap, in most of Europe.

0 points

Done.

83 points

Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it intends to prevent.

71 points

Another big thing to note: we recently had a different but VERY similar headline about an AI finding typhoid early, supposedly able to point it out more accurately than doctors could.

But when they examined the AI to see what it was doing, it turned out that it was weighing the specs of the machine being used to do the scan… An older machine means the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t pointing out whether someone had typhoid; it was just telling you whether they were in a rich area or not.

19 points

That’s actually really smart. But that info wasn’t given to doctors examining the scan, so it’s not a fair comparison. It’s a valid diagnostic technique to focus on the particular problems in the local area.

“When you hear hoofbeats, think horses not zebras” (outside of Africa)

8 points

AI is weird. It may not have been given the information explicitly. Instead, it could be an artifact in the scan itself due to the different equipment. For example, if one scan was lower resolution than the others but you resized all of the scans to the same size as the lowest one, the AI might be picking up on the resizing artifacts, which are not present in the lower-resolution one.

1 point

It is still quite a statement that it had a better detection rate than doctors.

What is more important, saving lives or not offending people?

2 points

The thing is, though… it had a better detection rate ON THE SAMPLES THEY HAD, but because it wasn’t actually detecting anything other than wealth, there was no way for them to trust it would stay accurate.

48 points

That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

Keep the human in the loop!

19 points

Not at all, in this case.

A false positive rate of even 50% can mean telling the patient “you are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.

Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate for a 5-year prediction would still only be telling something like 15% of women to be screened more often.

16 points

Breast imaging already relies on a high false positive rate. False positives are way better than false negatives in this case.

4 points

That’s just not generally true. Mammograms are usually only recommended to women over 40. That’s because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good thanks in part to the problem of false positives.

4 points

Nearly 4 out of 5 cases that progress to biopsy turn out to be benign, and nearly 4 times that number are called back for additional evaluation. The false positive rate is quite high compared with other imaging. It is designed that way, to decrease the chances of a false negative.

6 points

How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? If anything, false negatives are the ones that worry me the most.

13 points

It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.

Let’s say you have 10,000 patients, and 10 of them have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5%, that’s around 500 patients who will now get biopsies, and potentially surgery, that they don’t actually need. Those follow-up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis for the 10 patients who do have cancer.
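To put concrete numbers on it, here’s that same back-of-envelope calculation as a tiny script (the cohort figures are just the hypothetical ones above):

```python
# Hypothetical cohort from the example above.
patients = 10_000
true_cases = 10
sensitivity = 1.0            # assume the test catches every real case
false_positive_rate = 0.05   # 5% of healthy patients get flagged anyway

true_positives = true_cases * sensitivity
false_positives = (patients - true_cases) * false_positive_rate   # ~500
ppv = true_positives / (true_positives + false_positives)

print(f"patients flagged: {true_positives + false_positives:.0f}")
print(f"chance a flagged patient actually has cancer: {ppv:.1%}")  # roughly 2%
```

So even a “perfect” detector with a modest false positive rate sends about 50 healthy patients down the follow-up pipeline for every real case it catches.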

2 points

Well, it’d certainly benefit the medical industry. They’d be saddling tons of patients with surgeries, chemotherapy, mastectomies, and other treatments, “because doctor-GPT said so.”

But imagine being a patient getting physically and emotionally altered, plunged into irrecoverable debt, distressing your family, and it all being a whoopsy by some black-box software.

4 points

That’s a good point that it could burden the system, but why would you ever put someone on chemotherapy based on the model described in the paper? It seems more likely that it would burden the system by increasing the number of patients doing more frequent screening. Someone has to pay for all those doctor-patient meeting hours, for sure. But the benefit outweighs this cost (which in my opinion is good and cheap, since it prevents the expensive treatment that comes with later stages).

1 point
Deleted by creator
59 points

The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

23 points

Good luck reverse-engineering millions if not billions of seemingly random floating-point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels the input image has.
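For a sense of scale, here’s a quick way to count those numbers for the ResNet-18 mentioned above (assuming torchvision’s stock implementation); it comes out to roughly 11–12 million floating-point parameters, and that’s a small network by today’s standards.

```python
from torchvision.models import resnet18

# Count every learned weight in a stock ResNet-18.
model = resnet18(weights=None)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} learned floating-point numbers")   # ~11.7 million
```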

Under no circumstance should we accept a “black box” explanation.

Go learn at least the basic principles of neural networks, because this sentence of yours alone makes me want to slap you.

13 points

Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

4 points

Hey look, this took me like 5 minutes to find.

Censius guide to AI interpretability tools

Here’s a good thing to wonder: if you don’t know how your black-box model works, how do you know it isn’t racist?

Here’s what looks like a university paper on interpretability tools:

As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
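To make it concrete, here’s about the simplest version of this kind of tool — a plain gradient-saliency check in PyTorch (not one of the tools from the guide, just the textbook “which input pixels move the score” trick, run on a random stand-in image):

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # stand-in input

score = model(image)[0].max()    # score of the top class
score.backward()                 # gradient of that score w.r.t. every pixel

# Pixels with large gradients influenced the prediction the most.
saliency = image.grad.abs().max(dim=1).values              # (1, 224, 224) heat map
```

It only tells you which pixels mattered for one particular prediction, but that’s already a long way from shrugging and calling it a black box.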

Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do (who am I to take away mankind’s finer pleasures?), but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.

5 points

“Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

3 points

interpretability costs money though :v

20 points

IIRC, it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of models.

6 points

Link?

13 points

This one’s from 2019: Link
I was a bit off the mark; it’s not that the models they use aren’t black boxes, it’s just that they could have made them interpretable from the beginning and chose not to, likely due to liability.

6 points

Well, in theory you can explain how the model comes to its conclusion. However, I guess that only 0.1% of the “AI Engineers” are actually capable of that. And those probably cost 100k per month.

4 points

It depends on the algorithms used. The lazy approach these days is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
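As one example of what “interpretable” can mean, here’s a sketch using a shallow decision tree on scikit-learn’s bundled (tabular) breast-cancer dataset; every rule the model learns can be printed and audited by a human.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Shallow tree: a handful of human-readable if/else rules, not millions of weights.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

print(export_text(tree, feature_names=list(data.feature_names)))
```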

12 points

IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts in order to even start to explain how it works.

I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

22 points

The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.

In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.

3 points

Thank you for giving some insights into ML, which is now often just branded “AI”. Just one note, though: there are many ML algorithms that do not employ neural networks, and they don’t have billions of parameters. Especially in binary-choice image recognition (looks like cancer or not), things like support vector machines achieve great results, and they have very few parameters.
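For instance, here’s a sketch of a linear SVM on scikit-learn’s tabular breast-cancer dataset (extracted measurements rather than raw pixels, so not quite the image case, but it shows the scale): the entire fitted model is one weight per feature plus a bias.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear")).fit(X_train, y_train)
svm = clf.named_steps["svc"]

print("parameters:", svm.coef_.size + svm.intercept_.size)   # 30 weights + 1 bias
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```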

9 points

y = w^T x

hope this helps!

7 points

Our brain is a black box, and we accept that (and control the outcomes with procedures, checklists, etc.).

It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.

-3 points

What a vague and unprovable thing you’ve stated there.

