82 points

I’m actually surprised by the comments in here. This technology is incredibly disruptive to authors. If they are correct that their intellectual property has been misused by these companies to train LLMs, then they absolutely should have the right to prevent that.

You can be pro-AI and pro-advancement and still respect creators’ intellectual rights and their right not to have all their content stolen by megacorporations and used to create profits while decimating entire industries.

19 points

Exactly this. This is the equivalent of me taking a movie, making a function, charging for it, and then being displeased when the creators demand an explanation about it.

12 points

It’s more like reading a book and then charging people to ask you questions about it.

AI training isn’t only for mega-corporations. We can already train our own open source models, so we shouldn’t put up barriers that will keep out all but the ultra-wealthy.

13 points

But when the answers aren’t original thoughts, but regurgitations of other people’s thoughts about the book, then it’s plagiarism. LLMs can’t provide original output, only variations on what people have made available (whether legally or not). The answer might not even be correct or make any sense. It’s just predictive text to a crazy degree.

When you copy someone’s work without attribution, that’s plagiarism. When your output is only possible because of someone else’s work, over which they own copyright, and the output replicates the copyrighted material, that’s copyright infringement.
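To make “predictive text to a crazy degree” concrete, here is a minimal sketch: a toy bigram model that predicts the next word as the one seen most often after the current word. Real LLMs use neural networks over subword tokens, and the training text here is made up for the example, but the objective (predict the next token) is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then predict the most frequent continuation.
training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

next_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_counts[current][following] += 1

def predict(word):
    # Return the most common word observed after `word`.
    return next_counts[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- seen after "the" more often than "mat"
```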

12 points

No, it’s more like checking out every book from the library, and spending 450 years training at the speed of light, being evaluated on how well you can exactly reproduce the next part of any snippet taken from any book.

10 points

It’s more like reading a book and then charging people to ask you questions about it.

No, it’s really nothing like reading at all. Your example requires a human element. This is just the consumption of data, not reading.

1 point

Nah, false.

If you as a PERSON, an individual not looking to make a profit, do it, then yes, it would be absurd.

But here a corporation is trying to do exactly what they have been doing with open source projects: making a real paywall out of other people’s work (Red Hat, cough cough).

3 points

It’s more like buying a book, studying everything in it, then charging people for tutoring them with the knowledge you got from the book.

But now a machine is doing it, with all the books it can find…

10 points

One of the largest communities on Lemmy is !piracy@lemmy.dbzer0.com, so I’m not really surprised that there are people who don’t care about copyright :)

On the other hand, if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing? Are they going to sue CliffsNotes too?

8 points

Hold on, piracy isn’t necessarily about not caring for copyright. It can be (and in a lot of cases is) about fighting against the big corporations who take advantage of historically abusive copyright laws to dominate the market and prevent small authors and companies from surviving.

These AI companies, despite being copyright violators, are much closer to the big IP monopolists than the small authors, which are victims of both groups.

4 points

about fighting against the big corporations who take advantage of historically abusive copyright laws to dominate the market and prevent small authors and companies from surviving.

If people were really that principled, they’d totally boycott the big corporations and only consume media from the small authors and companies.

6 points

My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.

Training databases should be ethically sourced from opt-in programs, as some companies, such as Adobe, are already doing.

2 points

My main point is that if people don’t want their content used for training LLMs they should absolutely have the option to not have their content used to train LLMs.

How can one prove that their content is being used to train the LLM though, rather than something that’s derivative of their content like reviews of it?

5 points

if a human is allowed to write a summary of a book, why should an AI not be allowed to do the same thing?

Said human presumably would have to purchase or borrow a book in order to read it, which earns the author some percentage of the profits. If giant corps want to use the books to train their LLMs, it’s only fair that they’d have to negotiate with the publishers much like libraries do.

3 points

Said human presumably would have to purchase or borrow a book in order to read it

Borrowing a book from a library doesn’t earn the author any more profit each time it’s lent out, I don’t think. My local library just buys books off Amazon.

What if I read the CliffsNotes and make my own summary based on that? What if I read someone else’s summary and reword it? I think that’s more like what ChatGPT is doing - I really don’t think it’s being fed entire copyrighted books as training data. There’s no actual proof that LibGen or ZLib is being used to train it.

6 points

Eventually the bad actors are going to lose a lot of money trying to litigate their theft of people’s art. It was always going to end up in the legal system. These apps are even programmed to scrub watermarks and signatures. It’s deliberate theft.

4 points

Yes, thank you for this comment.

3 points

I agree. This technology doesn’t exist in a vacuum. This isn’t some utopia where a Human artist can just solely focus on creating their art and not worry about financial gain because their survival needs are always guaranteed to be met or whatever.

2 points

I’m pro AI and advancement, and anti-IP.

I hope to see AI disrupt our capitalistic value of ownership further.

39 points

‘Reading my book infringes on my copyright,’ say confused writers.

58 points

This is a strawman.

You cannot act as though feeding LLMs data is remotely comparable to reading.

8 points

Why not?

15 points

Because reading is an inherently human activity.

An LLM consuming data from a training model is not.

4 points

Because the LLM is also outputting the copyrighted material.

18 points

This is what I never understood about the whole training on AI thing.

When a human creates an artwork, they don’t do it out of a vacuum. They’ve had a lifetime of inspiration from artwork they’ve discovered that inspires them to create something wholly new. AI does the same thing.

30 points

The AIs we are talking about are large language models. They take human work as input and produce facsimiles. They are owned by individuals or companies that have no permission to exploit, in this way, intellectual property tied to other people’s livelihoods in order to copy it.

LLMs are not sentient, they don’t have inspiration, they are not creative and therefore do not create in the sense an artist would. They are an elaborate mathematical equation.

“Training” an AI has nothing to do with training an actual living being. It’s just tuning: adjusting an algorithm incrementally until the operator is satisfied with the result. I think it’s defensible to consider this form of extraction plagiarism.
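The “tuning” described above can be sketched in a few lines: incrementally nudge a parameter until the error on the data shrinks. The data, learning rate, and single-weight model below are made up for the illustration; real training adjusts billions of weights the same way.

```python
# Toy "training": fit a single weight w so that w * x approximates y,
# by repeatedly stepping w against the gradient of the squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x
w = 0.0
learning_rate = 0.05

for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(round(w, 3))  # converges toward 2.0
```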

8 points

Most likely, if you ask ChatGPT to summarize a famous book, it does not need to have ever trained on the book itself. The easiest way for an LLM to create a summary of something is to base its summary off existing summaries created by humans. If it’s ruled in court that ChatGPT is infringing on the copyright of a book’s author only by repeating information it acquired from other summaries created by humans, what implications does that have for the humans who wrote the other summaries?

6 points

Intellectual property in general is a ridiculous concept.

5 points

I partially agree with you, but I think you’re missing the end goal of Facebook et al.

As HughJanus pointed out, it’s not really any different from a person reading a book, and by that reasoning, using copyrighted material to train models like these falls well within the existing framework of “fair use”.

However, that depends entirely on “the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.” I agree completely with you that OpenAI’s products/business (the most blatant violator) does easily violate “fair use” due to that clause. However they’re doing it, at least partially, to “force the issue” on the open question of “how much can public information be privatized?” with the goal of further privatizing and increasing commercial applications of raw data.

As you pointed out LLMs can only create facsimiles and not the original work, and by that logic they can’t exactly replicate the inputs either.

No I don’t think artists can claim that they own any and all “cheap facsimiles” of their works, but by that same reasoning none of these models produced should be allowed to be the enforceable property of any individual/company either.

For further reading check out:

  • Kelly v. Arriba Soft Corporation on why “thumbnails” (and by extension LLMs, “eigen-images”, etc.) are inherently transformative and constitute fair use.
  • Bridgeport Music, Inc. v. Dimension Films for the negative impacts that ruling has had and how it still doesn’t protect artists from their work being used to train an LLM.
  • “Variational auto-encoders” for understanding how the latest LLMs actually do achieve a significant amount of “originality” and, I would argue, are able to be minimally creative.
6 points

Yeah, people are just trying to cash in on AI by suing companies that train AI.

21 points

It’s the AI companies cashing in with other people’s work so far.

5 points

AIs are trained for the equivalent of thousands of human lifetimes (if not more). There’s no precedent for anything like this.

10 points

Dude, tell me, why do you think they have been doing this only with books and art but not music?

That’s because music really has people protecting their assets. You can have your opinion about it, but that’s the only reason they haven’t ABUSED companies’ and people’s work in music.

It’s not reading. It’s the equivalent of me taking a movie, making a function, charging for it, and then being displeased when the creators demand an explanation.

5 points

There are a few reasons why music models haven’t exploded the way that large-language models and generative image models have. Maybe the strength of the copyright-holders is part of it, but I think that the technical issues are a bigger obstacle right now.

  • Generative models are extremely data-inefficient. The Internet is loaded with text and images, but there isn’t as much music.

  • Language and vision are the two problems that machine learning researchers have been obsessed with for decades. They built up “good” datasets for these problems and “good” benchmarks for models. They also did a lot of work on figuring out how to encode these types of data to make them easier for machine learning models. (I’m particularly thinking of all of the research done on word embeddings, which are still pivotal to large language models.)

Even still, there are fairly impressive models for generative music.
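The word-embedding idea mentioned above can be sketched in a few lines: words become vectors, and geometric closeness (here, cosine similarity) stands in for relatedness. The 3-D vectors below are made up for the example; real embeddings are learned and have hundreds of dimensions.

```python
import math

# Hand-picked toy "embeddings": related words get nearby vectors.
embeddings = {
    "guitar": [0.9, 0.1, 0.0],
    "violin": [0.8, 0.2, 0.1],
    "invoice": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, ~0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine(embeddings["guitar"], embeddings["violin"]))   # high (~0.98)
print(cosine(embeddings["guitar"], embeddings["invoice"]))  # low
```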

5 points

Example of music generation: MusicLM. The abstract mentions having to create a new dataset to get these results.

4 points

What is the meaning of “making a function” in your sentence?

1 point

Like a showing in the theater.

Seems like my grammar’s still shit, sry

2 points

It’s probably much harder with music.

33 points

In other news, old man yells at clouds.

13 points

Yeah, I’ll be very surprised if this goes anywhere. Are they going to sue CliffsNotes as well?


“If a user prompts ChatGPT to summarize a copyrighted book, it will do so,” the suit claims.

https://en.wikipedia.org/wiki/The_Bedwetter

Time to add Wikipedia to the suit!

28 points

People keep taking issue with this article’s use of “summarizing” and linking to Wikipedia… Summaries of copyrighted work are obviously not illegal.

This article is oversimplified and does a crummy job of explaining the problem. Ars Technica does a much better job explaining.

The fact that the AI can summarize these works in detail is proof that they were trained using copyrighted material without permission (which is not fair use). Sarah Silverman is obviously not going to be hurt financially by this, but there are hundreds of thousands of authors who definitely will be affected. They have every right to sue.

11 points

Why does “fair use” even fall into it? I’m not familiar with their specific license, but the general definition of copyright is:

A copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform a creative work, usually for a limited time.

Nothing was copied, or distributed (in a form that anybody could consider “The Work”), or displayed, or performed. The only possible legal argument they have is adaptation as a derivative work. And anybody who is familiar with how an LLM works knows that the form that results from reading in content is completely different from the source.

LLMs/LDMs are not taking in billions of books and putting them into a database. It is a very lossy process. Out of all the billions of images in the Stable Diffusion training set, the resulting model is 4 GB. There is no universe where you can store billions of images in a mere 4 GB. Stable Diffusion cannot and will not, pixel by pixel, reproduce a Van Gogh. It can make something that kind of looks like a Van Gogh, but styles are not copyrightable.

The same applies to an LLM like ChatGPT. It cannot reproduce entire books, or anywhere close to that. If you ask it to recreate Page 25 of Silverman’s book, it can’t do it. If it doesn’t even contain a minor portion of the original material, it can’t even be considered a derivative work.

They don’t have a case. They have a lot of publicity and noise, but they will lose to inevitability.
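The lossiness point is easy to check with back-of-the-envelope arithmetic. A quick sketch (the ~4 GB model size and ~2.3 billion training images are approximate, assumed figures for illustration):

```python
# How much model capacity is available per training image?
model_bytes = 4 * 1024**3          # ~4 GB of weights (assumed)
training_images = 2_300_000_000    # ~2.3 billion images (assumed)

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of model capacity per training image")
# Under 2 bytes per image -- far too little to store any of them.
```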

12 points

You make a lot of excellent points, but I think the main issue of contention is just using copyrighted work to train generative AI without the author’s permission regardless.

If they did ask permission, there would be no problem. But an author or artist should be given the choice if their work is going to be used to train an AI.

6 points

You make a lot of excellent points, but I think the main issue of contention is just using copyrighted work to train generative AI without the author’s permission regardless.

If I read a book at the library… and come up with an amazing revolutionary product, then make a company and go on to make billions of dollars per year, the original book author has no claim to my income.

There’s no contention. This is just a money grab. Copyright doesn’t disallow people from consuming the content as they please. It simply disallows passing off the original work as your own when it’s not.

4 points

I think the main issue of contention is just using copyrighted work to train generative AI without the author’s permission regardless.

You must define that in legal terms. This is a lawsuit, after all. It’s not illegal to “just use” copyrighted work. The words “generative AI” are not in a federal or state bill anywhere in the US.

They can have an “issue of contention” all they want, but if they can’t prove anything legally, they have nothing.

