The American Matthew Butterick has started a legal crusade against generative artificial intelligence (AI). In 2022, he filed the first lawsuit in the history of this field against Microsoft, one of the companies that develop these types of tools (GitHub Copilot). Today, he’s coordinating four class action lawsuits that bring together complaints filed by programmers, artists and writers.
If successful, he could force the companies responsible for applications such as ChatGPT or Midjourney to compensate thousands of creators. They may even have to retire their algorithms and retrain them with databases that don’t infringe on intellectual property rights.
I don’t see the US restricting AI development. No matter what is morally right or wrong, this is strategically important, and they won’t kneecap themselves in the global competition.
Great power competition / military-industrial complex. AI is a pretty vague term, but practically it could be used to describe drone swarming technology, cyber warfare, etc.
LLM-based chatbots and image generators are the types of “AI” that rely on stealing people’s intellectual property. I’m struggling to see how that applies to “drone swarming technology.” The only obvious use case is in the generation of propaganda.
Luddites smashing power looms.
It’s worth remembering that the Luddites were not against technology. They were against technology that replaced workers without compensating them for the loss, so that the owners of the technology could profit.
Their problem was that they smashed too many looms and not enough capitalists. AI training isn’t just for big corporations. We shouldn’t applaud people who put up barriers that will make it prohibitively expensive for regular people to keep up. This will only help the rich and give corporations control over a public technology.
It should be prohibitively expensive for anyone to steal from regular people, whether it’s big companies or other regular people. I’m not more enthusiastic about the idea of people stealing from artists to create open source AIs than I am when corporations do it. For an open source AI to be worth the name, it would have to use only open source training data - i.e., stuff that is in the public domain or has specifically had an open source licence assigned to it. If the creator hasn’t said they’re okay with their content being used for AI training, then it’s not valid for use in an open source AI.
Moreover, the Luddites were opposed to the replacement of independent at-home workers by oppressed factory child labourers. Much like OpenAI aims to replace creative professionals with an army of precarious, poorly paid microworkers.
Yep! And it’s not like a lot of creative professionals are paid all that well right now. The tech and finance industries do not value creatives.
If successful, he could force the companies responsible for applications such as ChatGPT or Midjourney to compensate thousands of creators. They may even have to retire their algorithms and retrain them with databases that don’t infringe on intellectual property rights.
They will readily agree to this after having made their money, and use their ill-gotten gains to train a new model. The rest of us will have to go pound sand, since making a new model will have become prohibitively expensive. Good intentions, but it will only help them by pulling up the ladder behind them.
I’m an artist and I can guarantee his lawsuits will accomplish jack squat for people like me. In fact, if successful, they will likely hurt artists trying to adapt to AI. Let’s be serious here: copyright doesn’t really protect artists, it’s a club for corporations to swing around to control our culture. AI isn’t the problem, capitalism is.
Maybe licensing comments isn’t that crazy, aye? 😉