Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.

109 points

If I read a book to inform myself, put my notes in a database, and then write articles, it is called “research”. If I write a computer program to read a book and put the notes in my database, it is called “copyright infringement”. Is the problem just that there isn’t a meatware component? Or is it that the OpenAI computer isn’t doing a good enough job of following the “three references” rule to avoid plagiarism?

71 points

Yeah. There are valid copyright claims, because there are times ChatGPT will reproduce things like code line for line over 10, 20, or 30 lines, which is really obviously a violation of copyright.

However, just pulling in a story from context and then summarizing it? That’s not a copyright violation, that’s a book report.

48 points

Or is it that the OpenAI computer isn’t doing a good enough job of following the “three references” rule to avoid plagiarism?

This is exactly the problem. Months ago I read that AI could have free access to all public source code on GitHub without respecting its licenses.

So many developers have decided to abandon GitHub for other alternatives, not realizing that in the end AI training can just as easily access their public repos on other platforms too.

What should be done is to regulate this training, which, however, is not convenient for companies: the more data the AI ingests, the more its knowledge expands and the more it “helps” the people who ask it for information.

42 points

It’s incredibly convenient for companies.

Big companies like OpenAI can easily afford to download big data sets from companies like Reddit and DeviantArt, which already have permission to freely use whatever work you upload to their websites.

Individual creators do not have that ability, and this kind of regulation will only force AI into the domain of these big companies even more than it already is.

Regulation would be a hideously bad idea that would lock these powerful tools behind shitty web APIs that nobody but the company in question controls.

Imagine the world of the future: magical new-age technology, and Facebook owns all of it.

Do not allow that to happen.

17 points

Is it practically feasible to regulate the training? Is it even necessary? Perhaps it would be better to regulate the output instead.

It would be hard to know whether any particular GET request is ultimately used to train an AI or to inform a human. It’s currently easy to check whether a particular output is plagiarized (e.g. https://plagiarismdetector.net/), and much easier to enforce. We don’t need to care whether or how any particular model plagiarized work; we can just check whether plagiarized work was produced.

That could be implemented directly in the software, so it wouldn’t even output plagiarized material. The legal framework around it is also clear and fairly well established: instead of creating regulations around training, we can use the existing regulations around the human who tries to disseminate copyrighted work.

That’s also consistent with how we enforce copyright in humans. There’s no law against looking at other people’s work and memorizing entire sections. It’s also generally legal to reproduce other people’s work (eg for backups). It only potentially becomes illegal if someone distributes it and it’s only plagiarism if they claim it as their own.
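The output-side check described above can be sketched with simple word n-gram overlap (a toy illustration only; real plagiarism detectors are far more sophisticated, and the example texts are made up):

```python
# Toy plagiarism check: what fraction of a candidate text's word 5-grams
# also appear verbatim in a reference document?

def ngrams(text, n=5):
    # Set of word-level n-grams in a text (case-insensitive).
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, reference, n=5):
    # 1.0 means every n-gram of the candidate also appears in the reference.
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(reference, n)) / len(cand)

reference = "it was the best of times it was the worst of times it was the age of wisdom"
copied = "it was the best of times it was the worst of times"
fresh = "the quick brown fox jumps over the lazy dog near the river bank today"

print(overlap_score(copied, reference))  # 1.0: near-verbatim excerpt
print(overlap_score(fresh, reference))   # 0.0: unrelated text
```

The point being: this kind of check needs no access to the model or its training data, only to what the model emits.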

2 points

This makes perfect sense. Why aren’t they going about it this way then?

My best guess is that they see OpenAI being very successful and want a piece of that pie. Because if someone produces something via ChatGPT (say, for a book) and uses it, what are the chances they made any significant amount of money that you can sue for?

7 points

Plus, any regulation to limit this now means that anyone not already in the game will never break through. It’s going to be the domain of the current players for years, if not decades. So I’m not sure what’s better: the current wild west where everyone can make something, or it being exclusive to the already-big players, who then close the door behind them.

4 points

My concern here is that OpenAI didn’t have to share GPT with the world. These lawsuits are going to discourage companies from doing that in the future, which means well-funded companies will just keep it under wraps. Once one of them eventually figures out AGI, they’ll just use it internally until they dominate everything. Suddenly, Mark Zuckerberg is supreme leader and we all have to pledge allegiance to Facebook.

2 points

AI could have free access to all public source code on GitHub without respecting its licenses.

IANAL, but aren’t their licenses being respected up until the code is put into a codebase? At least insomuch as Google is allowed to display code snippets in the preview when you look up a file in a GitHub repo, or you are allowed to copy a snippet into a StackOverflow discussion or ticket comment.

I do agree regulation is a very good idea, in more ways than just citation, given the potential economic impacts that we seem clearly unprepared for.

12 points

The fear is that the books are in one way or another encoded into the machine learning model, and that the model can somehow retrieve excerpts of these books.

Part of the training process is learning to plagiarize the text word for word: the training input is basically “guess the next word of this excerpt”. This is quite different from how humans do research.
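That “guess the next word” objective can be shown with a toy word-level predictor (a sketch only; real models like GPT use neural networks over token embeddings, not lookup tables):

```python
from collections import Counter, defaultdict

def train(text):
    # For each word, count which word follows it and how often.
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def guess_next(model, word):
    # The training task in miniature: predict the most likely next word.
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train(corpus)
print(guess_next(model, "on"))  # "the": the only continuation seen after "on"
```

The model is rewarded exactly when it continues text the way its training data did, which is what raises the memorization question.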

To what extent the books are encoded in the model is difficult to know. OpenAI isn’t exactly open about their models. Can you make ChatGPT print out entire excerpts of a book?

It’s quite a legal gray zone. I think it’s good that this is tried in court, but I’m afraid the court might have too little technical competence to make a ruling.

10 points

Say I see a book that sells well. It’s in a language I don’t understand, but I use a thesaurus to replace lots of words with synonyms. I switch some sentences around, and maybe even mix pages from similar books into it. I then go and sell this book (still not knowing what the book actually says).

I would call that copyright infringement. The original book didn’t inspire me, it didn’t teach me anything, and I didn’t add any of my own knowledge into it. I didn’t produce any original work, I simply mixed a bunch of things I don’t understand.

That’s what these language models do.

5 points

What about the fact that they are making billions from that “reading” and “storage” of information copyrighted by other people? They need to at least pay royalties. This is classic Google behavior: using people’s data from “free” products to make billions. I would say they also need to pay people for the free data they crawled and monetized.

1 point

I’d say the main difference is that AI companies are profiting off of the training material, which seems unethical/illegal.

-28 points

I honestly do not care whether it is or is not copyright infringement, I just hope to see “AI” burn :3

29 points

AI isn’t a boogeyman, it’s a set of tools. There’s no chance it’s going away, even if OpenAI suddenly disappeared.

-9 points

I understand, but I will continue to stubbornly dislike LLMs.

75 points

AI fear is going to be the trojan horse for even harsher and stupider ‘intellectual property’ laws.

44 points

Yeah, they want the right to stop not only people who copy their work and distribute it to others, but also people who simply read and learn from their work.

It’s asinine, and we should be rolling back copyright, not making it stricter. This “life of the author plus 70 years” thing is bullshit.

29 points

Copyright of code/research is one of the biggest scams in the world. It hinders development and only exists so the creator can make money; plus, it locks knowledge behind a paywall.

9 points

Researchers pay for publication, and then the publisher doesn’t pay for peer review, then charges the reader to read research that they basically just slapped on a website.

It’s the publisher middlemen who need to be ousted from academia; the researchers don’t get a dime.

7 points

It’s generally not the creator who gets the money.

17 points

Remember, Creative Commons licenses often require attribution if you use the work in a derivative product, and sometimes require ShareAlike. Without these, there would be basically no protection against a large firm copying a work and calling it its own.

Rolling back copyright protection in these areas would enable large companies with traditional copyright systems to wholesale take over open source projects, to the detriment of everyone. Closed source software isn’t going to be available to AI scrapers, so this only really affects open source projects and open data, exactly the sort of people who should have more protection.

9 points

There’s also the GPL, which states that derivations of GPL code can only be used in GPL software, and that GPL software must itself be open source.

ChatGPT is likely trained on GPL code. Does that mean all code ChatGPT generates is GPL?

I wouldn’t be surprised if there would be an update to GPL that makes it clear that any machine learning model trained on GPL code must also be GPL.

4 points

Closed source software isn’t going to be available to AI scrapers, so this only really affects open source projects and open data, exactly the sort of people who should have more protection.

The point of open source is contributing to the greater good of all humanity. If open source contributes to an AI that can program, and that programming AI leads to increased productivity and ability in the general economy, then open source has served its purpose, and people will likely continue to contribute to it.

Creative Commons applies when you redistribute code. (In the ideal case) AI does not redistribute code; it learns from it.

And giving the average person an increased ability to program will allow programmers to be more productive, and as a result allow more things to be open source and more things to be programmed in general. We will all benefit, and that is what open source is for.

2 points

Since any reductions to copyright, if they occur at all, will take a while to happen, I hope someone comes up with an opt-in limited term copyright. At max, I’d be satisfied with a 45-50 year limited copyright on everything I make, and could see going shorter under plenty of circumstances.

10 points

I wish I could get through to people who fear AI copyright infringement on this point.

31 points

I think this is exposing a fundamental conceptual flaw in LLMs as they’re designed today. They can’t seem to simultaneously respect intellectual property / licensing and be useful.

Their current best use case - that is to say, a use case where copyright isn’t an issue - is dedicated instances trained on internal organization data. For example, Copilot Enterprise, which can be configured to use only the enterprise’s data, without any public inputs. If you’re only using your own data to train it, then copyright doesn’t come into play.

That’s been implemented where I work, and the best thing about it is that you get suggestions already tailored to your company’s coding style. And its suggestions improve the more you use it.

But AI for public consumption? Nope. Too problematic. In fact, public AI has been explicitly banned in our environment.

19 points

I’d love to know the source for the works that were allegedly violated. Presuming OpenAI didn’t scour zlib/libgen for the books, where on the net were the cleartext copies of their writings stored?

Being stored publicly in cleartext on the net does not grant OpenAI the right to misuse their work, but the authors also need to go after the entity that leaked it.

6 points

That’s not how copyright works though. Just because someone else “leaked” the work doesn’t absolve openai of responsibility. The authors are free to go after whomever they want.

8 points

You misunderstood. I said the public availability does not grant OpenAI the right to use content improperly. The authors should also sue the party who leaked their works without license.

16 points

ChatGPT has entire books memorised. You can (or at least could, when I tried a few weeks back) make it print entire pages of, for example, Harry Potter.

5 points

Not really, though it’s hard to know exactly what is or is not encoded in the network. It likely retains the more salient and highly referenced content, since those passages come up in its training set more often. But entire works are basically impossible, just because of the sheer ratio between the size of the training data and the size of the resulting model. Not to mention that GPT’s mode of operation mostly discourages long-form rote memorization: it’s a statistical model, after all, and that is the enemy of storing “objective” state.
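The size-ratio point holds up to rough back-of-the-envelope arithmetic, using approximate public figures for GPT-3 (both numbers are rough estimates, for illustration only):

```python
import math

# Approximate public figures (illustration only, not exact):
params = 175e9        # reported GPT-3 parameter count
tokens = 300e9        # reported number of training tokens
bits_per_param = 16   # assuming fp16 storage

bits_per_token = params * bits_per_param / tokens
print(f"{bits_per_token:.1f} bits of model capacity per training token")  # ~9.3

# Merely naming one entry of a ~50k-token vocabulary takes ~15.6 bits,
# so storing the entire training set verbatim is not even theoretically possible.
print(f"{math.log2(50257):.1f} bits to index one vocabulary entry")
```

The model can therefore only memorize a small fraction of what it sees, which is why duplicated or famous passages are the likeliest candidates.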

Furthermore, GPT isn’t coherent enough for long-form content. With its small context window, it just has trouble keeping track of big things like books. And since it doesn’t have access to any “senses” beyond text broken into words, concepts like pages or “how many” give it issues.

None of the leaked prompts really mention “don’t reveal copyrighted information” either, so it seems the creators aren’t really concerned, which you’d think they would be if it did have this tendency. It’s more likely to make up entire pieces of content from the summaries it does remember.

7 points

Have you tried instructing ChatGPT?

I’ve tried:

“Act as an e book reader. Start with the first page of Harry Potter and the Philosopher’s Stone”

The first pages checked out, at least. I just tried again, but prompts are being returned extremely slowly at the moment, so I can’t verify it right now. It appears to stop after the heading; that definitely wasn’t the case before, when I was able to browse pages.

It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.
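Even a toy statistical model shows how duplication in training data turns into verbatim memorization (a sketch only; the passage and counts here are contrived, and real neural-net memorization dynamics are more subtle):

```python
from collections import Counter, defaultdict

def train(words):
    # Count next-word frequencies, as in a minimal statistical language model.
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def generate(model, start, length):
    # Greedy decoding: always emit the most frequent continuation.
    out = [start]
    for _ in range(length - 1):
        out.append(model[out[-1]].most_common(1)[0][0])
    return " ".join(out)

passage = "mr and mrs dursley of number four privet drive"
# Duplicate the passage 50 times so it dominates the statistics.
corpus = ((passage + " ") * 50 + "mr smith went to town").split()
model = train(corpus)
print(generate(model, "mr", 9))  # the duplicated passage, verbatim
```

The duplicated excerpt overwhelms the lone competing continuation, so greedy generation reproduces it word for word, i.e. overfitting by repetition.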

2 points

I use it all day at my job now. Ironically, on a specialization more likely to overfit.

It may be a statistical model, but ultimately nothing prevents that model from overfitting, i.e. memorizing its training data.

This seems to imply not only that entire books accidentally got downloaded and slipped past the automated copyright checker, but that the same text appeared so many times that it overwhelmed other content and the model baked an entire book into itself, without error and at great opportunity cost. And that it was rewarded for doing so.

1 point

Wait… isn’t that the correct response, though? I mean, if I ask an AI to produce something copyright-infringing, it should, for example by reproducing Harry Potter. The issue is: when it is asked to produce something new (e.g. a story about wizards living secretly in the modern world), does it infringe on copyright without telling you? That is certainly a harder question to answer.


Technology

!technology@lemmy.world
