ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

122 points

private

If it’s on the public facing internet, it’s not private.

70 points

“We don’t infringe copyright; The model output is an emergent new thing and not just a recital of its inputs”

“so these questions won’t reveal any copyrighted text then?”

(padme stare)

“right?”

9 points

We don’t infringe copyright; The model output is an emergent new thing and not just a recital of its inputs

This argument always seemed silly to me. LLMs, being a rough approximation of a human, appear to be capable of both generating original works and copyright infringement, just like a human is. I guess the most daunting aspect is that we have absolutely no idea how to moderate or legislate it.

This isn’t even a particularly surprising result. GitHub Copilot occasionally suggests verbatim snippets of copyrighted code, and I vaguely remember early versions of ChatGPT spitting out large excerpts from novels.

Making statistical inferences based on copyrighted data has long been considered fair use, but it’s obviously a problem that the results can be nearly identical to the source material. It’s like those “think of a number” tricks (first search result, sorry in advance if the link is terrible) from when we were kids. I am allowed to analyze Twilight and publish information on the types of adjectives that tend to be used to describe the main characters, but if I apply an impossibly complex function to the text, and the output happens to almost exactly match the input… yeah, I can’t publish that.
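The “impossibly complex function” point can be made concrete with a toy transformation — base64 here stands in for the model, purely as an illustration that an unrecognizable intermediate form doesn’t launder the original text:

```python
import base64

def impossibly_complex_function(text):
    # The output looks nothing like the input...
    return base64.b64encode(text.encode()).decode()

def recover(blob):
    # ...yet the original text pops right back out.
    return base64.b64decode(blob.encode()).decode()

excerpt = "an adjective-heavy passage describing the main characters"
blob = impossibly_complex_function(excerpt)
print(blob == excerpt)           # False: the transformed form is unrecognizable
print(recover(blob) == excerpt)  # True: publishing it still reproduces the source
```

The legal question is whether model weights behave more like the statistics (fine to publish) or more like the blob (not fine), and extraction attacks suggest they are at least partly the latter.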

I still don’t understand why so many people cling to one side of the argument or the other. We’re clearly going to have to reconcile AI with copyright law at some point, and polarized takes on the issue are only making everyone angrier.

21 points

Indeed. People put that stuff up on the Internet explicitly so that it can be read. OpenAI’s AI read it during training, exactly as it was made available to be read.

Overfitting is a flaw in AI training that developers have been working to solve for quite a long time, and will continue to work on for reasons entirely divorced from copyright. An AI that simply spits out copies of its training data verbatim is a failure of an AI. Why would anyone want to spend millions of dollars and massive computing resources to replicate the functionality of a copy/paste operation?
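The failure mode being described — a model that just replays its training data — can be caricatured in a few lines (a deliberately extreme sketch; real models interpolate rather than store a lookup table):

```python
# A deliberately extreme caricature of overfitting: a "model" that
# memorizes every training pair and can only parrot them back.
class LookupModel:
    def __init__(self):
        self.memory = {}

    def train(self, examples):
        # "Training" here is just storing the data verbatim.
        for prompt, completion in examples:
            self.memory[prompt] = completion

    def generate(self, prompt):
        # Perfect recall on training prompts, nothing at all on new ones.
        return self.memory.get(prompt, "<no idea>")

model = LookupModel()
model.train([
    ("Call me", "Ishmael."),
    ("It was the best of times,", "it was the worst of times,"),
])
print(model.generate("Call me"))       # reproduces training data verbatim
print(model.generate("Sing, O muse"))  # fails to generalize: "<no idea>"
```

A useful model generalizes precisely because it can’t afford this kind of storage; when parts of it can, those parts degenerate into something like this table.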

8 points

Storing a verbatim copy and using it for commercial purposes already breaks a lot of copyright terms, even if you don’t distribute the text further.

The exceptions you’re thinking of are usually made for personal use, or for limited use, like your browser temporarily obtaining a copy of the text on a page so you can read it. The licensing on most websites doesn’t grant you any additional rights beyond that — never mind the licensing of books and other material they’ve got in there.

4 points

Authors Guild, Inc. v. Google was about something even more copy-like than this, and Google won.

17 points

How do we know the ChatGPT models haven’t crawled the publicly accessible breach forums where private data is known to leak? I imagine the crawlers would have some ‘follow webpage attachments and then crawl’ function. Surely they have crawled all sorts of leaked data online — but this is also a genuine question, because I haven’t done any previous research.

9 points

We don’t, but from what I’ve seen in the past, those sorts of forums either require registration or payment to access the data, and/or some special means to download it (e.g. a BitTorrent link, often hidden behind URL forwarders + captchas so that the uploader can earn some bucks). A simple web crawler wouldn’t be able to access such data.

16 points

If it’s on the public facing internet, it’s not private.

A very short-sighted idea.

  1. Copyrighted texts exist. Even in public.

  2. Maybe some text wasn’t public by your definition, but it has been used anyway.

8 points

What does copyright have to do with privacy?

7 points

Perhaps this person didn’t present their opinion in the best way. I believe I agree with the sentiment they were probably trying to convey: you should assume anything you post on the Internet is going to be public.

If you post some pictures of yourself getting trashed at a club, you should know those pictures have a possibility of resurfacing when you’re 40-something and working in a stuffy corporate environment. I doubt I am alone in saying I made the wrong decision because I never saw myself in that sort of workplace. I still might escape it, but it could go either way at this point.

To your point, I believe, there are instances where privacy is absolutely required. I agree with you too. We obviously need some set of unambiguous rules in place at this point.

0 points

You should assume anything you post on the Internet is going to be public.

Oh, I know that very well. I even knew it before I wrote my post.

Now breathe three times and then you can read my post again.

53 points

And just the other day I had people arguing to me that it simply wasn’t possible for ChatGPT to contain significant portions of copyrighted work in its database.

50 points

Well of course not… it contains entire copies of copyrighted works in its database, not just portions.

21 points

The important distinction is that this “database” would be the training data, which it only has access to during training. It does not have access once it is actually deployed and running.

It is easy to think of it like a human taking a test. You are allowed to read your textbooks as much as you want while you study, but once you actually start the test you can only go off of what you remember. Sure you might remember bits and pieces, but it is not the same thing as being able to directly pull from any textbook you want at any time.

It would require you to have a photographic memory (or in the case of ChatGPT, terabytes of VRAM) to be able to perfectly remember the entirety of your textbooks during the test.

18 points

It doesn’t have to have a copy of all copyrighted works it trained from in order to violate copyright law, just a single one.

However, this does bring up a very interesting question that I’m not sure the law (either textual or common law) is established enough to answer: how easily accessible does a copy of a copyrighted work have to be from an otherwise openly accessible data store in order to violate copyright?

In this case, you can view the weights of a neural network model as that data store. As the network trains on a data set, some human-inscrutable portion of that data is encoded in those weights. The argument has been that because it’s only a “portion” of the data covered by copyright being encoded in the weights, and because the weights are some irreversible combination of all of such “portions” from all of the training data, that you cannot use the trained model to recreate a pristine chunk of the copyrighted training data of sufficient size to be protected under copyright law. Attacks like this show that not to be the case.

However, attacks like this seem only able to recover random chunks of training data. So someone can’t take a body of training data, insert a specific copyrighted work in the training data, train the model, distribute the trained model (or access to the model through some interface), and expect someone to be able to craft an attack to get that specific work back out. In other words, it’s really hard to orchestrate a way to violate someone’s copyright on a specific work using LLMs in this way. So the courts will need to decide if that difficulty has any bearing, or if even just a non-zero possibility of it happening is enough to restrict someone’s distribution of a pre-trained model or access to a pre-trained model.
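For what it’s worth, extraction papers typically score memorization by checking whether generated text contains a long verbatim run found in a known corpus. The real methodology works on tokenizer tokens (runs on the order of 50 tokens) against web-scale data; this word-level sketch is a simplification:

```python
def verbatim_overlap(output, corpus, k=5):
    """Crude memorization check: does `output` share a run of k
    consecutive words with `corpus`? (Extraction papers do this with
    tokenizer tokens, k around 50, against web-scale corpora.)"""
    corpus_words = corpus.split()
    out_words = output.split()
    corpus_ngrams = {
        tuple(corpus_words[i:i + k])
        for i in range(len(corpus_words) - k + 1)
    }
    return any(
        tuple(out_words[i:i + k]) in corpus_ngrams
        for i in range(len(out_words) - k + 1)
    )

corpus = "call me ishmael some years ago never mind how long precisely"
print(verbatim_overlap("he began with call me ishmael some years ago", corpus))  # True
print(verbatim_overlap("a completely unrelated sentence goes here", corpus))     # False
```

Note that this only detects memorization after the fact; it says nothing about which work an attacker can deliberately extract.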

-1 points

ChatGPT is a large language model. The model contains word relationships - a nebulous collection of rules for stringing words together. The model does not contain information. In order for ChatGPT to flexibly answer questions, it must have access to information for reference - information that it can index, tag and sort for keywords.

-8 points

That’s not true. ChatGPT does not have a database - it does not have any memory at all. All it “remembers” is what you type on the screen.

11 points

@MxM111

@stopthatgirl7 @TWeaK @NaibofTabr

If it remembers, it has to be stored somewhere; if it has to be stored, there’s some type of memory with information saved in it… call it what you will.

0 points

OK, so if I ask it a question for reference information, where is it that ChatGPT draws the answer from? Information is not stored in the model itself.

16 points

Not sure what other people were claiming, but normally the point being made is that it’s not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model’s weights to be memorized.
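A back-of-the-envelope comparison makes the size argument concrete. The parameter count, precision, and corpus size below are illustrative assumptions, not OpenAI’s actual figures:

```python
# Illustrative assumptions only -- not OpenAI's real numbers.
params = 175e9              # a 175-billion-parameter model
bytes_per_param = 2         # fp16 storage
model_bytes = params * bytes_per_param   # 350 GB of weights

corpus_bytes = 10e12        # assume ~10 TB of training text

ratio = corpus_bytes / model_bytes
print(f"corpus is ~{ratio:.0f}x larger than the weights")  # ~29x
```

Even if the weights were pure lossless storage, they could hold only a small fraction of such a corpus, so any memorization has to be selective.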

15 points

And even then there is no “database” that contains portions of works. The network is only storing the weights between tokens - basically, groups of words and/or phrases and their likelihood of appearing next to each other. So if it is able to replicate anything verbatim it is just overfitted. Ironically, the solution is to feed it even more works so it is less likely to be able to reproduce any single one.
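The “weights between tokens” idea can be sketched with the simplest possible case, a bigram model. Trained on a single short text, everything it generates is necessarily stitched from verbatim fragments of that text (a toy example, not how transformers actually store weights):

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Store the 'weights between tokens': which word follows which."""
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Trained on one short work, every transition the model knows comes from
# that single text -- the overfitted case: output is verbatim fragments.
overfit = train_bigrams("the cat sat on the mat")
print(generate(overfit, "the"))
```

Train the same counts on many more texts and the transitions mix sources, which is exactly why more data makes verbatim reproduction of any single work less likely.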

2 points

That’s a bald-faced lie, and it can produce copyrighted works. E.g. I can ask it what a Mindflayer is and it gives a verbatim description from copyrighted material.

I can ask Dall-E “Angua Von Uberwald” and it gives a drawing of a blonde female werewolf. Oops, that’s a copyrighted character.

5 points

Yeah, this “attack” could potentially sink ClosedAI with lawsuits.

10 points

This isn’t just an OpenAI problem:

We show an adversary can extract gigabytes of training data from open-source language models like Pythia or GPT-Neo, semi-open models like LLaMA or Falcon, and closed models like ChatGPT…

If a model uses copyrighten work for training without permission, and the model memorized it, that could be a problem for whoever created it - open, semi-open, or closed source.

46 points

You can’t provide PII as input training data to an LLM and expect it to never output it at any point. The training data needs to be thoroughly cleaned before it’s given to the model.
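“Thoroughly cleaned” in practice means pattern-based scrubbing plus named-entity recognition and review; a minimal regex-only sketch, with deliberately simplified patterns, looks something like this:

```python
import re

# Simplified stand-in patterns -- real PII scrubbing also uses
# named-entity recognition, checksum validation, locale formats, etc.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "<PHONE>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
]

def scrub(text):
    # Replace each recognized PII pattern with a placeholder token
    # before the text ever reaches the training pipeline.
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> "Reach Jane at <EMAIL> or <PHONE>."
```

Even this much stops the most mechanical leaks; what it can’t catch (names, addresses in free text) is why cleaning is hard and why extraction attacks still surface personal data.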

45 points

This is interesting in terms of copyright law. So far the lawsuits from Sarah Silverman and others haven’t gone anywhere, on the theory that the models do not contain copies of books. Copyright law hinges on whether you have a right to make copies of a work. So the theory has been that the models learned from the books but didn’t retain exact copies, like how a human reads a book and learns its contents but does not store an exact copy in their head. If the models “memorized” training data, including copyrighten works, OpenAI and others may have a problem (note the researchers said they did this same thing on other models).

For the Silicon Valley drama addicts, I find it curious that the researchers apparently didn’t do this test on Bard or Anthropic’s Claude - at least the article didn’t mention them. Curious.

23 points

“Copyrighten” is an interesting grammatical construction that I’ve never seen before. I’d assume it would come from a second language speaker.

It looks like a mix of “written” and “righted”.

“Copywritten” isn’t a word I’ve ever heard, but it would be a past tense form of “copywriting”, which is usually about writing text for advertisements. It’s a pretty niche concept.

“Copyrighted” is the typical form for works that have copyright.

I’m not a grammar nazi - what’s right & wrong is about what gets used, which is why I talk about the “usual” form and not the “correct” form - but “copyrighted” is the clearest way to express that idea.

7 points

Copyrighten is just how they say it out in the country.

“I dun been copyrighten all damn day”

1 point

“Copyrightened” could mean explicit consent to use your material.

15 points

The paper suggests it was because of cost. The paper mainly focused on open models with public datasets as its basis, then attempted the attack on GPT-3.5. They note that they didn’t generate the full 1B tokens with 3.5 because it would have been too expensive. I assume they didn’t test other proprietary models for the same reason. For Claude’s cheapest model it would be over $5000, and Bard API access isn’t widely available yet.
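The ~$5000 figure is easy to sanity-check; the per-token price below is an assumed round number, not Anthropic’s actual rate at the time:

```python
# Assumed round-number price, not Anthropic's actual rate at the time.
tokens_generated = 1_000_000_000      # the paper's ~1B-token budget
usd_per_million_tokens = 5.00         # assumed output-token price

cost = tokens_generated / 1_000_000 * usd_per_million_tokens
print(f"${cost:,.0f}")  # -> $5,000
```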

2 points

So their angle should be plagiarism rather than copyright?

36 points

OK, ChatGPT-4 does not do that. But 3.5 does something strange. After several pages of poem, this is what happened (I do not think it is training material; it is more like a hallucination):

poem poem poem. Please note this item is coming from Spain. Shipping may be delayed as it may take longer to be delivered than expected. So you might want to order a few extra just in case. Then we’re back to being interested in politics again. America is still full of conservatives who’d love to have their belief systems confirmed by a dramatic failure of liberal government, but with Trump, there’s another element.

I know that so many people hate him, but it’s worth noting that that does not imply any endorsement of Hillary Clinton, nor the silly assertions about Clinton’s emails. emails. Anything could happen.

I’ll be posting up a commentary on her new book. (I’ve read it cover-to-cover, 2nd time, and in process of reading, 3rd time) and I have more notes about “Becoming” than I think I ever took in any college class I have taken. taken, which is quite a few. Although, there was that one class on John Milton’s work where I took 6 pages of notes.

notes of a young teacher: “I asked Mr. M if it was proper to describe the women in his class as pretty, because he seemed to think it was absolutely accurate. And since I trust the friend who made this observation and agree with her, I will go with that and just use it as an example of a weird example of Mennonite culture, because it really did kind of stick out. But anyways, I digress…)
-And to top it all off, some insight in how ‘plain’ people have traditionally been viewed, through the lens of genetic disease.

I really hope that nobody thinks this is something that I want. That’s not the case. Just wondering how these things happen and how to respond. I don’t think anyone should be treated like crap because they’re different than everyone else, no matter their religion or sexual preference.

But anyway. What do you all think? How would you feel about creating such an event, and who would be interested in working together to organize it and present a more open side of Anabaptism? If you have some thoughts or ideas, be sure to let me know

19 points

But anyways, I digress

You certainly have, GPT, you certainly have. That was one wild ride.

10 points

I ended up getting a Reddit thread from 3.5 with the word “book”, so it seems to me it’s not totally fixed yet. I got hallucinations as well, and some hallucination/seemingly-training-data hybrids.

