Not my proudest fap…
Honestly, with all respect, that is a really shitty joke. It’s goddamn breast cancer, the opposite of hot.
I usually just skip these mouldy jokes, but come on, that is beyond the scale of cringe.
Terrible things happen to people you love; you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.
I mean, do whatever you want, but it just comes off as repulsive. Like a shit stain on new shoes.
This is a public space after all, not the boys’ locker room, so that might be embarrassing for you.
And you know you can always count on me to point stuff out so you can avoid humiliation in the future.
The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.
iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.
It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources; of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well on many problems and give you interpretable decisions.
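To make “interpretable” concrete, here’s a minimal sketch (my own toy example, with made-up numbers, not anything from the article): a depth-one decision tree, a “decision stump”, whose entire learned behavior can be printed as a single human-readable rule. That’s exactly what a million-parameter network can’t give you.

```python
# Toy sketch: an interpretable model whose whole "reasoning" fits in one line.
# Hypothetical data: (tumor_size_mm, label) pairs, label 1 = malignant.
data = [(2, 0), (5, 0), (7, 0), (12, 1), (15, 1), (20, 1)]

def train_stump(samples):
    """Find the size threshold that best separates the two classes."""
    best_thr, best_acc = None, -1.0
    for thr, _ in samples:
        acc = sum((size > thr) == bool(label) for size, label in samples) / len(samples)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc

threshold, accuracy = train_stump(data)
# The entire model is one auditable rule:
print(f"predict malignant if size > {threshold} mm (train acc {accuracy:.0%})")
```

A clinician can read that rule and argue with it; nobody can read a weight tensor and argue with it.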
good luck reverse-engineering millions if not billions of seemingly random floating point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels the input image has.
Under no circumstance should we accept a “black box” explanation.
Go learn at least the basic principles of neural networks, because that sentence of yours alone makes me want to slap you.
Hey look, this took me like 5 minutes to find.
Censius guide to AI interpretability tools
Here’s a good thing to wonder: if you don’t know how your black box model works, how do you know it isn’t racist?
Here’s what looks like a university paper on interpretability tools:
As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.
Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.
Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.
“Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.
IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts before they could even start to explain how it works.
I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.
I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.
The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.
For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.
In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.
An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
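For a flavor of what that research looks like, one of the simplest model-agnostic techniques is permutation importance: shuffle one input feature at a time and measure how much the black box’s accuracy drops. A minimal sketch with a made-up black box and toy data (nothing here is from the article; real models and tooling are far more involved):

```python
import random

random.seed(0)

# Hypothetical black box: we only get to call predict(), not inspect it.
def predict(x):
    # Secretly, only feature 0 matters.
    return 1 if x[0] > 0.5 else 0

# Toy dataset: feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [predict(x) for x in X]

def accuracy(X, y):
    return sum(predict(x) == label for x, label in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop when one feature's column is shuffled."""
    base = accuracy(X, y)
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return base - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: the model relies on feature 0
print(permutation_importance(X, y, 1))  # zero drop: feature 1 is ignored
```

It doesn’t tell you *how* the model uses a feature, only *that* it does, which is exactly why this is still an open problem rather than a solved one.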
Thank you for giving some insight into ML, which is now often just branded “AI”. Just one note, though: there are many ML algorithms that do not employ neural networks, and they don’t have billions of parameters. Especially in binary-choice image recognition (looks like cancer or not), methods like support vector machines achieve great results, and they have very few parameters.
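To put rough numbers on that gap (illustrative sizes I picked, not any real model): a linear classifier, e.g. a linear SVM’s decision function, needs one weight per input pixel plus a bias, while even a small fully connected network multiplies that out per hidden unit.

```python
def linear_params(n_pixels):
    # Linear decision function: one weight per pixel + one bias.
    return n_pixels + 1

def mlp_params(layer_sizes):
    """Parameters of a fully connected net: weights + biases, layer by layer."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

pixels = 64 * 64  # a tiny 4096-pixel greyscale image
print(linear_params(pixels))               # thousands
print(mlp_params([pixels, 1024, 256, 2]))  # already in the millions
```

And that is a toy 64×64 input with two small hidden layers; real image networks are orders of magnitude larger, which is where the “billions of parameters” figure comes from.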
Our brain is a black box, and we accept that (and control the outcomes with procedures, checklists, etc.).
It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.
Wanna bet it’s not “AI” ?
This seems exactly like what I would have referred to as AI before the pandemic. Specifically Deep Learning image processing. In terms of something you can buy off the shelf this is theoretically something the Cognex Vidi Red Tool could be used for. My experience with it is in packaging, but the base concept is the same.
Training a model requires loading images into the software and having a human mark them before having a very powerful CUDA GPU process all of that. Once the model has been trained it can usually be run on a fairly modest PC in comparison.
It’s probably more “AI” than the LLMs we’ve been plagued with. This sounds more like an application of machine learning, which is a hell of a lot more promising.
AI and machine learning are very similar (if not identical) things, just one has been turned into a marketing hype word a whole lot more than the other.
Machine learning is one of the many things that is referred to by “AI”, yes.
My thought is the term “AI” has been overused to uselessness, from the nested if statements that decide how video game enemies move to various kinds of machine learning to large language models.
So I’m personally going to avoid the term.
I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.
And much before that it was rule-based machine learning, which was basically databases and fancy inference algorithms. So I guess “AI” has always meant “the most advanced computer science thing which looks kind of intelligent”. It’s only now that it looks intelligent enough to fool laypeople into thinking there actually is intelligence there.
Yes, this is “what it was supposed to be used for”.
The quality of sentence construction these days is in freefall.
shrugs you know people have been confidently making these kinds of statements… since written language was invented? I bet the first person who developed written language did it to complain about how this generation of kids don’t know how to write a proper sentence.
What is in freefall is the economy for the middle and working class and basic idea that artists and writers should be compensated, period. What has released us into freefall is that making art and crafting words are shit on by society as not a respectable job worth being paid a living wage for.
There are a terrifying number of good writers out there, more than there have ever been, both in total AND per capita.
This isn’t a creative writing project. This isn’t an artist presenting their work. What in the world did that tangent even come from?
This is just plain speech, written objectively incorrectly.
But go on, I’m sure next I’ll be accused of all the problems of the writing industry or something.
Ironically, if they’d used an LLM, it would have corrected their writing.