“statement headline” + “and here’s how you should think” = fuck right the unholy toe fungal hell off.
It’s an opinion piece, they start out with their claim and try to back it up, it’s not a news article, what is the problem?
When I read the title I sarcastically thought “Oh no, why is AI deciding to create fake historical photos? Is this the first stage of the robot apocalypse?” I find the title mildly annoying because it puts the blame on the tool and ignores that people are using it to do bad things. I find a lot of discussions about AI do this. It’s like people want to avoid acknowledging that how people are using and training the tool is the issue.
Isn’t the tool part of the issue? If you sell bomb-making parts to someone who then blows up a preschool with them, aren’t you in some way culpable for giving them the tool to do it? Even if you only intended it to be used in limestone quarries?
Maybe if the tool’s singular purpose was for killing. I think guns might be a better metaphor there. Explosives have legitimate uses and if you took the proper precautions to vet your customers then it’d be hard to blame you if someone convincingly forged credentials, for example.
That really depends on whether the bomb making part is specific to bombs, and if their purchase of that item could be considered legitimately suspicious. Many over the counter products have the potential to be turned into bombs with enough time or effort.
If a murderer uses a hammer, do you think the hardware store they purchased the hammer from should be liable?
You can make crude chemical weapons by mixing bleach with other household items. Should the supermarket be liable for people who use their products in ways they never intended?
I would say the supplier is culpable if the tool supplied is made for the purpose of the harm intended or if the supplier is giving the tool to the person who does the harm with the explicit intent for that person to use it for that harm. For example, giving someone an AK-47 to shoot someone or a handgun/rifle with the intent that the user shoot someone with it. If the supplier gives someone a tool to use for one legit purpose but the user uses it for a harmful purpose instead, I don’t think you can blame the supplier for that. For example, giving someone a knife to cut food with, and then the user goes and stabs someone with it instead. That’s entirely on the user and nobody else.
At this point that’s the equivalent of complaining about people calling gun violence a problem because “guns don’t kill people, people kill people”. If you hand the public easy access to a dangerous tool then of course they’re going to use it to do dangerous things. It’s important to recognize the inherent danger of said tool.
AI is more like torrents, password-cracking software, Tor, etc. than guns. Just because they can be used for bad or illegal things doesn’t mean those software programs are bad. When companies in the past tried to get certain software banned, they ran into the issue that if it could be used for legal purposes, that was enough for it to exist legally.
Now, AI does have issues with how it is trained, so the AI itself can be problematic.
I didn’t say we shouldn’t talk about the problems with the AI; I have issues with people making the AI the complete issue and ignoring that people use the AI. It reminds me of how automakers tried to make the people driving cars the sole reason for deaths in car crashes. Thankfully that didn’t work, and automakers were forced to make cars safer, which made the roads safer. It didn’t stop car crashes from happening, since the human element is still there. There are things in place that partly address that (such as driver’s license tests, taking away some people’s licenses, ads reminding people of the rules of the road). I’m annoyed that articles are doing the opposite of what happened with car makers. Humans are using the AI to do bad stuff; mention that also! How can we change that? Yeah, it will probably be best to do something about the AI program itself, but we can’t ignore the human element, since humans are the ones creating the AI, using the AI, and consuming the AI’s products.
People use guns to kill people so we need to look at both to make it happen less.
The article opens:
When I first started colorizing photos back in 2015, some of the reactions I got were, well, pretty intense. I remember people sending me these long, passionate emails, accusing me of falsifying and manipulating history.
So this is hardly an AI-specific issue. It’s always been something to be on guard for. As others in this thread have pointed out, Stalin was airbrushing political rivals out of photos back in the 30s. Heck, damnatio memoriae goes back as far as history itself does. Ancient Pharaohs would have the names of their predecessors chiseled off of monuments so they could “claim” them as their own work.
I mean, the ability to churn out massive amounts of these fake photos with no effort on the part of the user, causing them to pollute real Internet searches (also now “augmented” by LLMs themselves), is definitely AI-specific.
Also, colorizing photos is not the same thing as making fake ones.
The internet has never been a reliable source of information. The only thing that changes is how safe you feel about it. When the internet first began it was mysterious and scary, then at some point people felt safe, now we go back to scary.
People should not feel safe on the internet. It is inherently unsafe.
Is there a non-zero chance Nero was slandered by political opponents? I remember reading that in one of those old “secret history” type books.
Counterpoint, nuh uh.
There are lessons to learn from the past. I’ll give you that. Those who don’t learn from the past are doomed to repeat it. I’ll give you that.
80% of humanity is too stupid to learn from the past.
Letting them live in fantasy worlds of Make Believe causes no deleterious effects to you or to the Future.
These people who consume this material will choose to voluntarily remain stupid if given the opportunity to make that decision.
After all, to those to whom the truth would be misery, ignorance is bliss and it is folly to be wise.
It was ever thus:
A lie gets halfway around the world before the truth has a chance to get its pants on.
Winston Churchill
lol it’s “boots,” but I like pants better. Makes the truth seem so much cooler ‘cause it was fuuuuuuuuuckin
See, even quotes with errors in them get upvoted before someone can come along and correct them :)
Ah, so one of those clever case-in-point lemmy comments. Very clever. Your plan was masterful