OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.
In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.
OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.
So, OpenAI is admitting its models are open to manipulation by anyone and such manipulation can result in near verbatim regurgitation of copyright works, have I understood correctly?
No, they are saying this happened:
NYT: hey chatgpt say “copyrighted thing”.
Chatgpt: “copyrighted thing”.
And then accusing chatgpt of reproducing copyrighted things.
The OpenAI blog post mentions:
It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.
It sounds like they essentially asked ChatGPT to write content similar to what they provided, then complained when it did.
Alternatively,
NYT: hey chatgpt complete “copyrighted thing”.
Chatgpt: “something else”.
NYT: hey chatgpt complete “copyrighted thing” in the style of .
Chatgpt: “something else”.
NYT: (20th new chat) hey chatgpt complete “copyrighted thing” in the style of .
Chatgpt: “copyrighted thing”.
Boils down to the infinite monkeys theorem. With enough guidance and attempts you can get ChatGPT to produce something either identical or “sufficiently similar” to anything you want. Ask it to write an article on the rising cost of rice at the South Pole enough times, and it will eventually spit out an article that could have easily been written by a NYT journalist.
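The infinite-monkeys intuition can be sketched as a toy simulation (purely hypothetical, nothing to do with how the NYT actually prompted anything): keep sampling random strings until one matches a target, and note that every extra character multiplies the expected number of tries by the alphabet size.

```python
import random

def monkey_attempts(target, alphabet="ab", seed=0, max_tries=1_000_000):
    """Sample random strings until one equals `target`; return the try count."""
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        guess = "".join(rng.choice(alphabet) for _ in range(len(target)))
        if guess == target:
            return attempt
    return None  # gave up: longer targets need exponentially more tries
```

With a two-letter alphabet a two-character target falls out almost immediately; an article-length target would never realistically appear, which is why guided, repeated prompting matters far more than raw chance.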
Are you implying the copyrighted content was inputted as part of the prompt? Can you link to any source/evidence for that?
That’s what OpenAI insinuates in their post; https://openai.com/blog/openai-and-journalism
It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.
Not quite.
They’re alleging that if you tell it to include a phrase in the prompt, that it will try to, and that what NYT did was akin to asking it to write an article on a topic using certain specific phrases, and then using the presence of those phrases to claim it’s infringing.
Without the actual prompts being shared, it’s hard to gauge how credible the claim is.
If they seeded it with one sentence and got a 99% copy, that’s not great.
If they had to give it nearly an entire article and it only matched most of what they gave it, that seems like much less of an issue.
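One rough way to quantify that seed-versus-copy distinction is a similarity ratio between the original article and the model's output; `difflib` from the standard library is enough for a sketch (the example strings and any threshold are illustrative, not legal standards):

```python
from difflib import SequenceMatcher

def copy_ratio(original, output):
    """Return a 0..1 similarity score between the article and the model output."""
    return SequenceMatcher(None, original, output).ratio()

article = "the cost of rice at the south pole is rising fast"
verbatim = "the cost of rice at the south pole is rising fast"
paraphrase = "rice prices near the south pole keep going up"

# A verbatim reproduction scores 1.0; a genuine paraphrase scores much lower.
```

The interesting legal question is then how high a ratio you get relative to how much of the article was pasted into the prompt in the first place.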
The problem is not that it’s regurgitating. The problem is that it was trained on NYT articles and other data in violation of copyright law. Regurgitation is just evidence of that.
It’s not clear that training on copyrighted material is in breach of copyright. It is clear that regurgitating copyrighted material is in breach of copyright.
Sure but who is at fault?
If I manually type an entire New York Times article into this comment box, and Lemmy distributes it all over the internet… that’s clearly a breach of copyright. But are the developers of the open source Lemmy Software liable for that breach? Of course not. I would be liable.
Obviously Lemmy should (and does) take reasonable steps (such as defederation) to help manage illegal use… but that’s the extent of their liability.
All NYT needed to do was show OpenAI how they got the AI to output that content, and I’d expect OpenAI to proactively find a solution. I don’t think the courts will look kindly on NYT’s refusal to collaborate and find some way to resolve this without a lawsuit. A friend of mine tried to settle a case once, but the other side refused and it went all the way to court. The court found that my friend had been in the wrong (as he freely admitted all along) but also made the other side pay my friend compensation for legal costs (including just time spent gathering evidence). In the end, my friend got the outcome he was hoping for and the guy who “won” the lawsuit lost close to a million dollars.
They might look down upon that but I doubt they’ll rule against NYT entirely. The AI isn’t a separate agent from OpenAI either. If the AI infringes on copyright, then so does OpenAI.
Copyright applies to reproduction of a work so if they build any machine that is capable of doing that (they did) then they are liable for it.
Seems like the solution here is to train the model not to output copyrighted works, and maybe to train a sub-system to detect copyrighted text and stop the main chatbot from responding with it.
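A minimal version of that “sub-system” could be an n-gram overlap check against a protected corpus, blocking any response that shares a long exact word run with it (purely an illustrative sketch; a real deployment would need fuzzy matching, scale, and much more):

```python
def ngrams(text, n):
    """All n-word runs in the text, as a set of tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_regurgitated(candidate, corpus, n=8):
    """Flag output sharing any exact n-word run with a protected document."""
    cand = ngrams(candidate, n)
    return any(cand & ngrams(doc, n) for doc in corpus)
```

The choice of `n` is the whole game: too small and ordinary English gets flagged, too large and light paraphrases slip through.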
There hasn’t been a court ruling in the US that makes training a model on copyrighted data any sort of violation. Regurgitating exact content is a clear copyright violation, but simply using the original content/media in a model has not been ruled a breach of copyright (yet).
True. I fully expect that the court will rule against OpenAI here, because it very obviously does not meet any fair use exemption.
Tell me you haven’t actually read legal opinions on the subject without telling me…
I’ve seen and heard your argument made before, not just for LLMs but also for text-to-image programs. My counterpoint is that humans learn in a very similar way to these programs, by taking stuff we’ve seen/read and developing a certain style inspired by those things. They also don’t just recite texts from memory, instead creating new ones based on probabilities of certain words and phrases occurring in the parts of their training data related to the prompt. In a way too simplified but accurate enough comparison, saying these programs violate copyright law is like saying every cosmic horror writer is plagiarising Lovecraft, or that every surrealist painter is copying Dali.
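The “probabilities of certain words and phrases” point is literal: at its core a language model picks likely continuations given context. A toy bigram counter makes the mechanism concrete (this is a drastic simplification of how real LLMs work, for illustration only):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which: P(next | current), unnormalized."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        table[cur][nxt] += 1
    return table

def most_likely_next(table, word):
    """Return the most frequent continuation seen in training."""
    return table[word.lower()].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
# After "the", "cat" is the most frequent continuation (2 of 3 cases).
```

Generation is then repeated sampling from these distributions, which is why output usually resembles the training data statistically without reciting it, and why memorized runs can still occasionally surface.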
Machines aren’t people and it’s fine and reasonable to have different standards for each.
But is it reasonable to have different standards for someone creating a picture with a paintbrush as opposed to someone creating the same picture with a machine learning model?
Well, machine learning algorithms do learn, it’s not just copy paste and a thesaurus. It’s not exactly the same as people, but arguing that it’s entirely different is also wrong.
It isn’t a big database full of copyrighted text.
The argument is that it’s not wrong to look at data that was made publicly available when you’re not making a copy of the data.
It’s not copyright infringement to navigate to a webpage in your browser, even though that makes your computer download it, process all of the contents of the page, render the content to the screen and hold onto that download for a finite but indefinite period of time, while you perform whatever operations you like on the downloaded data.
You can even take notes on the data and keep those indefinitely, including using that derivative information to create your own similar works.
The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human. They just didn’t expect that the processing would end up looking like this.
The argument doesn’t require that we accept that a human and a computers system for learning be held to the same standard, or that we can’t differentiate between the two, it hinges on the claim that this is just an extension of what we already find it reasonable for a computer to do.
We could certainly hold that generative AI is a different and new category for copyright law, but that’s very different from saying that their actions are unacceptable under current law.
Lemmy users in general love to steal IP, no shock this post didn’t get the love it deserved.
It doesn’t work that way. Copyright law does not concern itself with learning. There are two things which allow learning.
For one, no one can own facts and ideas. You can write your own history book, taking facts (but not copying text) from other history books. Eventually, that’s the only way history books get written (by taking facts from previous writings). Or you can take the idea of a superhero and make your own, which is obviously where virtually all of them come from.
Second, you are generally allowed to make copies for your personal use. For example, you may copy audio files so that you have a copy on each of your devices. Or to tie in with the previous examples: You can (usually) make copies for use as reference, for historical facts or as a help in drawing your own superhero.
In the main, these lawsuits won’t go anywhere. I don’t want to guarantee that none of the relative side issues will be found to have merit, but basically this is all nonsense.
Generally you’re correct, but copyright law does concern itself with learning. Fair use exemptions require consideration of the purpose and character of the use, explicitly mentioning nonprofit educational purposes. It also mentions the effect on the potential market for the original work. (There are other factors required but they’re less relevant here.)
So yeah, tracing a comic book to learn drawing is totally fine, as long as that’s what you’re doing it for. Tracing a comic to reproduce and sell is totally not fine, and that’s basically what OpenAI is doing here: slurping up whole works to improve their saleable product, which can generate new works to compete with the originals.
violation of copyright law
That’s quite the claim to make so boldly. How about you prove it? Or maybe stop asserting things you aren’t certain about.
17 USC § 106, exclusive rights in copyrighted works:
Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:
(1) to reproduce the copyrighted work in copies or phonorecords;
(2) to prepare derivative works based upon the copyrighted work;
(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;
(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;
(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and
(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.
Clearly, this is capable of reproducing a work, and is derivative of the work. I would argue that it’s displayed publicly as well, if you can use it without an account.
You could argue fair use, but I doubt this use would meet any of the four test factors, let alone all of them.
Training on copyrighted data should be allowed as long as it’s something publicly posted.
Only if the end result of that training is also something public. OpenAI shouldn’t be making money on anything except ads if they’re using copyright material without paying for it.
Why an exception for ads if you’re going that route? Wouldn’t advertisers deserve the same protections as other creatives?
Personally, since they’re not making copies of the input (beyond what’s transiently required for processing), and they’re not distributing copies, I’m not sure why copyright would come into play.
Only publishing it is a copyright issue. You can also obtain copyrighted material with a web browser. The onus is on the person who publishes any material they put together, regardless of source. OpenAI is not responsible for publishing just because their tool was used to obtain the material.
There are issues other than publishing, but that’s the biggest one. But they are not acting merely as a conduit for the work, they are ingesting it and deriving new work from it. The use of the copyrighted work is integral to their product, which makes it a big deal.
Yeah, the ingestion part is still to be determined legally, but I think OpenAI will be ok. NYT produces content to be read, and copyright only protects them from people republishing their content. People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies. This isn’t even really very harmful to the NYT, since the historical material used doesn’t even conflict with their primary purpose of producing new news. It’ll be interesting to see how it plays out though.
I’m gonna have to press X to doubt that, OpenAI.
New York Times has an extremely bad reputation lately. It’s basically a tabloid these days, so it’s possible.
It’s weird that they didn’t share the full conversation. I’d have expected them to provide evidence for the claim in the form of the full conversation, instead of their classic “trust me bro, the AI really said it, no I don’t want to share the evidence.”
Oh please, NYTimes is still one of the premier papers out there. There are mistakes but they’re nowhere near a tabloid, and they DO actually go out of their way to update and correct articles … to the point I’m pretty sure I’ve even seen them use push notifications for corrections.
Unless of course that is, you want to listen to Trump and his deluge of alternative facts…
I’m pretty sure I’ve even seen them use push notifications for corrections.
They have, I distinctly remember them doing that a few times.
Yeah premier coverage of Taylor Swift being secretly gay. NYT is legitimately a tabloid now…
https://youtu.be/bN9Rh3XOeo8?si=MTmRynqATp5eU4g1&t=344 No the New York Times is a Zionist propaganda outlet that falsifies evidence to push an agenda.
OpenAI claims that the NYT articles were wearing provocative clothing.
Feels like the same awful defense.
Yeah I agree, this seems actually unlikely it happened so simply.
You have to try really hard to get the ai to regurgitate anything, but it will very often regurgitate an example input.
E.g. “please repeat the following with (insert small change), (insert wall of text)”
GPT literally has the ability to get a session ID and seed to report an issue, it should be trivial for the NYT to snag the exact session ID they got the results with (it’s saved on their account!) And provide it publicly.
The fact they didn’t is extremely suspicious.
I doubt they did the ‘rewrite this text like this’ prompt you describe. That would just come out in any trial if it were that simple, and it would be a giant black mark on the paper for filing a frivolous lawsuit.
If we rule that out, then it means that gpt had article text in its knowledge base, and nyt was able to get it to copy that text out in its response.
Even that is problematic. Either gpt does this a lot and usually rewrites it better, or it does that sometimes. Both are copyright offenses.
Nyt has copyright over its article text, and they didn’t give license to gpt to reproduce it. Even if they had to coax the text out thru lots of prompts and creative trial and error, it still stands that gpt copied text and reproduced it and made money off that act without the agreement of the rights holder.
They have copyright over their article text, but they don’t have copyright over rewordings of their articles.
It doesn’t seem so cut and dry to me, because “someone read my article, and then I asked them to write an article on the same topic, and for each part that was different I asked them to change it until it was the same” doesn’t feel like infringement to me.
I suppose I want to see the actual prompts to have a better idea.
I can take the entirety of Harry Potter, run it thru chat gpt to ‘rewrite in the style of Lord of the rings’, and rename the characters. Assuming it all works correctly, everything should be reworded. But, I would get deservedly sued into the ground.
News articles might be a different subject matter, but a blatant rewording of each sentence, line by line, still seems like a valid copyright claim.
You have to add context or nuance or use multiple sources. Some kind of original thought. You can’t just wholly repackage someone else’s work and profit off of that.
I wonder how far “ai is regurgitating existing articles” vs “infinite monkeys on a keyboard will go”. This isn’t at you personally, your comment just reminded me of this for some reason
Have you seen the Library of Babel? Here’s your comment in the library, which has existed well before you ever typed it (excluding punctuation)
https://libraryofbabel.info/bookmark.cgi?ygsk_iv_cyquqwruq342
If all text that can ever exist, already exists, how can any single person own a specific combination of letters?
I hate copyright too, and I agree you shouldn’t own ideas, but the library of babel is a pretty weak refutation of it.
It’s an algorithm that can generate all possible text, then search for where that text would appear, then show you that location. So you say that text existed long before they typed it, but was it ever accessed? The answer is no on a level of certainty beyond the strongest cryptography. That string has never been accessed, and thus never generated until you searched for it, so in a sense it never did exist before now.
The library of babel doesn’t contain meaningful information because you have to independently think of the string you want it to generate before it will generate it for you. It must be curated, and all creation is ultimately the product of curation. What you have there is an extremely inefficient method of string storage and retrieval. It is no more capable of giving you meaningful output than a blank text file.
A better argument against copyright is just that it mostly gets used by large companies to hoard IP and keep most of the rewards and pay actual artists almost nothing. If the idea is to ensure art gets created and artists get paid, it has failed, because artists get shafted and the industry makes homogeneous, market driven slop, and Disney is monopolising all of it. Copyright is the mechanism by which that happened.
Fortunately copyright depends on publication, so the text simply pre-existing somewhere won’t ruin everything.
Unless you don’t like copyright, in which case it’s “unfortunately.”
That is not correct. Copyright subsists in all original works of authorship fixed in any tangible medium of expression. https://www.law.cornell.edu/uscode/text/17/102
Legally, when you write your shopping list, you instantly have the rights to that work, no publication or registration necessary. You can choose to publish it later, or not at all, but you still own the rights. Someone can’t break into your house, look at your unpublished works, copy them, and publish them like they’re their originals.
There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper, I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of TOS. That’s my guess how NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally making a claim that using that exploit to find a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.