OpenAI’s ChatGPT and Sam Altman are in massive trouble. OpenAI is being sued in the US for illegally using content from the internet to train its LLM, or large language model.
This is interesting. Now wealthy folks can defend their copying of data for personal gain, while content piracy remains a criminal offense for the everyday Joe, complete with steep fines and sometimes a vacation at Club Fed.
I very much enjoy ChatGPT and I’m excited to see where that technology goes, but lawsuits like this feel so shaky to me. OpenAI used publicly available data to train their AI model. If I wanted to get better at writing, and I went out and read a ton of posted text and articles to learn, would I need to ask permission from each person who posted that information? What if I used what I learned to develop a style similar to how a famous journalist writes, then got a job and made money from the knowledge I gained?
The thing that makes these types of lawsuits hard to win is proving that they “stole” data and used it directly. But my understanding of learning models in language and art is that they learn from the material rather than use it directly. I got access to Midjourney last August, and my first thought was: better enjoy this before it gets sued into uselessness. The problem is, people can sue these companies, but this genie can’t be put back into the bottle. Even if OpenAI gets hobbled in what it can do, other companies in other countries will do the same, and these lawsuits will stop nothing.
We’re going to see this technology mature and get baked into literally every aspect of life.
Absolutely agree with you. In theory it’s no different from a child learning from what they’re exposed to in the world around them. But I guess the true desire from some would be to get royalty payments every time a brain made use of their “intellectual property”, so I don’t think this argument would necessarily convince them.
I think the more realistic way it would be handled is that the site you published your content on puts in its terms of service that anything you post can be used for AI training, and OpenAI buys the data from them, either via an API key or a data dump or whatever.
But I see merit in not allowing companies to profit off whatever content already exists without any sort of consent. And I don’t agree that it’s like a child learning… you aren’t raising a child to sell a subscription to their knowledge and profit off it.
If you release data into the public domain (aka, if it’s indexable by a search engine), then copying that data isn’t stealing; it can’t be, since the data was already public in the first place.
This is just some lawyer trying to make a name for themselves.
Just because the data is “public” doesn’t mean it was intended to be used in this manner. Some of the data was even explicitly protected by GPL licensing or similar.
But GPL licensing indicates that “If code was put in the public domain by its developer, it is in the public domain no matter where it has been”; so, likewise for data. If anyone has a case against OpenAI, it’d be whatever platforms they scraped, and ultimately those platforms would open their own, individual lawsuits.
I don’t agree. Purpose and use case should be a factor. For example, my friends take pictures of me and put them on social media to share memories. Those images have since been scraped by companies like Clearview AI providing reverse face search to governments and law enforcement. I did not consent to or agree to that use when my likeness was captured in a casual setting like a birthday party.
Perhaps, but it could easily be argued that you knew that what you share on the internet is viewable by anyone. Are you going to sue Clearview and/or the law enforcement agencies for control over your image that’s in the public domain?
Let’s note that a NY Magazine article is copyrighted but publicly available.
If an LLM scrapes that article, then regurgitates pieces of it verbatim in response to prompts, without quoting or parodying, that is clearly a violation of NY Mag’s copyright.
If an LLM merely consumes the content and uses it to infinitesimally improve its ability to guess the next word that fits into a reply to a prompt, without a series of next-words reproducing multiple sentences from the NY Mag article, then that should be perfectly fine.
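That “guess the next word” framing can be sketched with a toy model. The following is a hypothetical bigram counter in Python, vastly simpler than a real transformer, but it shows what statistical next-word prediction means: the model stores word-follows-word frequencies from its training text, not the text itself as a retrievable document.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the statistically most likely next word, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

counts = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(counts, "the"))  # "cat" follows "the" twice, "mat" once
```

The prediction reflects aggregate statistics over the training text; whether those statistics can still reproduce long verbatim spans is exactly what the infringement question turns on.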
That’s not how copyright works. You cannot freely monetize other people’s work. If you publish some artwork, I cannot copy it and sell it as my own.
But you can learn from it and create your own new art that may have a similar style as the original
A human can, within limits.
But software isn’t human. AI models aren’t “learning”, “practicing” and “developing their own skills”.
Human-made software is copying other people’s work, transforming it, letting a bunch of calculations loose on it, and mass-producing works similar to the input.
Using an artist’s work to train an AI model and making similar stuff with it to make money is like copying someone’s work, putting it on a mug, and selling that.
It’s not using it as inspiration to improve your own skills.
If I learned to read from Dr. Seuss books, does that mean that everything I write owes a copyright tariff to the Geisel estate?
So anyone who creates something remotely similar to something online is plagiarizing, got it.
Folks, that’s how we all do things - we read stuff, we observe conversations, we look at art, we listen to music, and what we create is a synthesis of our experiences.
Yes, it is possible for AI to plagiarize, but that needs to be evaluated on a case by case basis, just as it is for humans.
The lawsuit isn’t about plagiarism; it’s about using content without obtaining permission.
And exactly which AI is republishing content unmodified?
We are creating content based on this article, but no one is accusing us of stealing content. AIs creating original content based on their “experience” is only plagiarism (or copyright violation) if it isn’t substantially original.
Is it stealing to learn how to draw by referencing other artists online? That’s how these training algorithms work.
I agree that we need to keep this technology from widening the wealth gap, but these lawsuits seem to fundamentally misunderstand how training an AI model works.
AI is not human. It doesn’t learn like a human. It mathematically uses what it’s seen before to statistically find what comes next.
AI isn’t learning, it’s just regurgitating the content it was fed in different ways
But is the output original? That’s the real question here. If humans are allowed to learn from information publicly available, why can’t AI?
No, it isn’t original. The output of AI is just reorganized content that it has already seen.
AI doesn’t learn, and it doesn’t create derivative works. It does nothing more than reshuffle what it’s already seen, to the point that it will frequently use phrases pulled directly from training data.