9 points

Not my proudest fap…

2 points

Honestly, with all respect, that is a really shitty joke. It’s god damn breast cancer, the opposite of hot.

I usually just skip these mouldy jokes, but come on, that is beyond the scale of cringe.

2 points

Terrible things happen to people you love; you have two choices in this life. You can laugh about it, or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

2 points

I mean, do whatever you want, but it just comes off as repulsive, like a stain of shit on new shoes.
This is a public space after all, not the boys’ locker room, so that might be embarrassing for you.

And you know you can always count on me to point stuff out, so you can avoid humiliation in the future.

7 points

That’s a challenging wank

4 points

Man I miss him.

59 points

The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

20 points

iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

4 points

It depends on the algorithms used. The lazy approach these days is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.

6 points

Well, in theory you can explain how the model comes to its conclusion. However, I’d guess that only 0.1% of “AI engineers” are actually capable of that, and those probably cost 100k per month.

6 points

Link?

13 points

This one’s from 2019: Link
I was a bit off the mark; it’s not that the models they use aren’t black boxes, it’s that they could have made them interpretable from the beginning and chose not to, likely due to liability.
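
For anyone wondering what “interpretable from the beginning” can look like in practice, here’s a rough sketch (scikit-learn; the dataset is just a stand-in for illustration) of an interpretable-by-design model. A shallow decision tree spits out rules a human can actually read, instead of millions of opaque weights.

```python
# Toy sketch of an interpretable-by-design model: a shallow decision
# tree whose learned rules can be printed as plain if/else text.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Every decision the model can make, readable by a human (or a court).
print(export_text(tree, feature_names=list(data.feature_names)))
```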

23 points

Good luck reverse-engineering millions, if not billions, of seemingly random floating point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.

Under no circumstance should we accept a “black box” explanation.

Go learn at least the basic principles of neural networks, because that sentence of yours alone makes me want to slap you.
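
Just to put a number on it, here’s a quick sketch (PyTorch; a stock ResNet-18 as a stand-in, since we don’t know the actual model from the paper):

```python
# Count the learned floating point parameters in a stock image model.
from torchvision import models

model = models.resnet18()  # small by modern standards
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,}")  # ~11.7 million parameters
```

And that’s a small one.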

4 points

Hey look, this took me like 5 minutes to find.

Censius guide to AI interpretability tools

Here’s a good thing to wonder: if you don’t know how your black box model works, how do you know it isn’t racist?

Here’s what looks like a university paper on interpretability tools:

As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.

Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do (who am I to take away mankind’s finer pleasures?), but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.
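
And since seeing is believing, here’s roughly what one of those tools looks like in use: a permutation-importance check with scikit-learn. The model and dataset here are stand-ins, not the ones from the paper; the point is the technique.

```python
# Sketch: asking a trained "black box" which inputs drive its decisions.
# Shuffle each feature and measure how much accuracy drops; a big drop
# means the model leans heavily on that feature.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model depends on the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(data.feature_names[i], round(result.importances_mean[i], 3))
```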

5 points

“Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

3 points

interpretability costs money though :v

13 points

Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

12 points

IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts in order to even start to explain how it works.

I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

22 points

The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works, and I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.

In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
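
To give a flavor of the partial progress: one family of techniques produces saliency maps, which highlight the input pixels that most influenced a specific classification. A minimal sketch (PyTorch; the model and input are dummy stand-ins):

```python
# Gradient saliency: one partial answer to "why did the model classify
# THIS image that way?". Model and input are dummy stand-ins.
import torch
from torchvision import models

model = models.resnet18().eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(image)[0].max()  # score of the predicted class
score.backward()               # gradient of that score w.r.t. each pixel

# Large gradient magnitude = small pixel changes move the decision a lot.
saliency = image.grad.abs().max(dim=1).values  # one heatmap per image
print(saliency.shape)  # torch.Size([1, 224, 224])
```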

3 points

Thank you for giving some insight into ML, which is now often just branded “AI”. Just one note, though: there are many ML algorithms that do not employ neural networks and don’t have billions of parameters. Especially in binary image classification (looks like cancer or not), techniques like support vector machines achieve great results, and they have very few parameters.
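
To make that concrete, here’s a minimal sketch on scikit-learn’s (tabular) breast cancer dataset, which happens to fit the topic. The entire fitted model is 31 numbers you can print and inspect:

```python
# A linear SVM for a binary "cancer or not" decision: the whole model
# is 30 weights plus 1 bias, not billions of opaque parameters.
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(), LinearSVC()).fit(X, y)

svm = clf.named_steps["linearsvc"]
print(svm.coef_.size + svm.intercept_.size)  # 31 parameters, total
```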

7 points

Our brain is a black box, and we accept that (and we control the outcomes with procedures, checklists, etc.).

It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.

-3 points

What a vague and unprovable thing you’ve stated there.

9 points

y = w^T x

hope this helps!

-2 points

Wanna bet it’s not “AI”?

11 points

This seems exactly like what I would have referred to as AI before the pandemic: specifically, deep learning image processing. In terms of something you can buy off the shelf, this is theoretically something the Cognex Vidi Red Tool could be used for. My experience with it is in packaging, but the base concept is the same.

Training a model requires loading images into the software and having a human mark them before a very powerful CUDA GPU processes all of that. Once the model has been trained, it can usually be run on a comparatively modest PC.
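
In code terms the split looks roughly like this (a generic PyTorch sketch, not the Cognex tool itself, which is proprietary): the labelled-image training is the GPU-hungry part, while deployment is cheap.

```python
# Generic pattern: fine-tune on a big CUDA GPU, then run on a modest PC.
# The random tensors stand in for the human-labelled images.
import torch
from torch import nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. pass / defect
model.to(device)

opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# --- training: the GPU-heavy part ---
images = torch.rand(8, 3, 224, 224, device=device)
labels = torch.randint(0, 2, (8,), device=device)
loss = loss_fn(model(images), labels)
opt.zero_grad()
loss.backward()
opt.step()

# --- inference: runs fine on a modest CPU box ---
model.to("cpu").eval()
with torch.no_grad():
    print(model(torch.rand(1, 3, 224, 224)).argmax(1))
```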

16 points

It’s probably more “AI” than the LLMs we’ve been plagued with. This sounds more like an application of machine learning, which is a hell of a lot more promising.

3 points

AI and machine learning are very similar (if not identical) things; it’s just that one has been turned into a marketing hype word a whole lot more than the other.

4 points

Machine learning is one of the many things that is referred to by “AI”, yes.

My thought is the term “AI” has been overused to uselessness, from the nested if statements that decide how video game enemies move to various kinds of machine learning to large language models.

So I’m personally going to avoid the term.

15 points

Learning machines are AI as well. It’s not really what we picture when we think of AI, but it is nonetheless.

0 points

I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.

20 points

That was pretty much the definition of AI before LLMs came along.

9 points

And long before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing that looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
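
For the younger crowd, “rule-based” meant literally this kind of thing (a toy sketch; the rules are invented for illustration):

```python
# Old-school "AI": hand-written rules plus a trivial forward-chaining
# inference loop. Nothing is learned; everything is authored by hand.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "maybe_flu"),
    ({"maybe_flu"}, "recommend_doctor"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the inferred "recommend_doctor"
```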

17 points

Yes, this is “how it was supposed to be used for”.

The sentence construction quality these days is in freefall.

2 points

Bro, it’s Twitter

3 points

And that excuses it I guess.

0 points

That would be correct, yes.

5 points

*shrugs* You know people have been confidently making these kinds of statements… since written language was invented? I bet the first person who developed written language did it to complain about how this generation of kids doesn’t know how to write a proper sentence.

What is in freefall is the economy for the middle and working class, and the basic idea that artists and writers should be compensated, period. What has sent us into freefall is that making art and crafting words are shit on by society as not respectable jobs worth being paid a living wage for.

There are a terrifying number of good writers out there, more than there have ever been, both in total number AND per capita.

8 points

This isn’t a creative writing project. This isn’t an artist presenting their work. Where in the world did that tangent even come from?

This is just plain speech, written objectively incorrectly.

But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.

1 point

Objectively incorrect according to whom, exactly?

5 points

Ironically, if they’d used an LLM, it would have corrected their writing.

1 point

Sure, I definitely overreacted, and I honestly was pretty stressed out the day I replied, so yeah, fair. I think I have a point; this just wasn’t the salient place for it, and I was too tired to realize that in the moment.

5 points

Not everyone’s a native speaker.
