44 points

Not the same thing, dog. Being inspired by other things is different from plagiarism.

13 points

And yet so many of the debates around this new form of media and creativity come down to the grey space between what is inspiration and what is plagiarism.

Even if everyone agreed with your point, and I think broadly they do, it doesn’t settle the debate.

4 points

The real problem is that AI will never, ever be able to make art without using content copied from other artists, which is absolutely plagiarism.

2 points

But an artist cannot be inspired without content from other artists. I don’t agree with the word “copied” here either, because it is not copying when it creates something new.

10 points

Humans learn from other creative works, just like AI. AI can generate original content too if asked.

18 points

AI creates output from a stochastic model of its training data. That’s not a creative process.
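
If it helps, here’s roughly what “stochastic” means at generation time, as a toy sketch (the vocabulary, scores, and names below are made up for illustration, not taken from any real model): the trained model assigns a score to every candidate next token, and the output is drawn at random from the resulting probability distribution.

```python
# Toy sketch of stochastic generation: softmax the model's scores (logits),
# then sample the next token from that distribution. Vocabulary and logits
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 0.5, 1.0, 0.2, 1.5])  # hypothetical scores from a trained model

def sample_next_token(logits, temperature=1.0):
    """Turn logits into probabilities, then draw one token index at random."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

print(vocab[sample_next_token(logits)])
```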

5 points

What does that mean, and isn’t that still something people can employ for their creative process?

3 points

LLM AI doesn’t learn. It doesn’t conceptualise. It mimics, iterates and loops. AI cannot generate original content with LLM approaches.

2 points

Interesting take on LLMs; how are you so sure about that?

I mean, I get it: current image-gen models seem clearly uncreative, but the unrestricted versions of Bing Chat/ChatGPT leave some room, at least to me, for the possibility of creativity/general intelligence in future sufficiently large LLMs.

So the question (again: to me) is not only “will LLMs scale to (human-level) general intelligence”, but also “will we find something better than RLHF/LLMs/etc. before then?”.

I’m not sure on either, but I’d assign roughly a 2/3 probability to the first, and, given the first event and AGI being in reach in the next 8 years, a comparatively small chance to the second.

-2 points

We’ll soon see whether or not it’s the same thing.

Only 50 years ago or so, some well-known philosophers of AI believed computers would write great poetry before they could ever beat a grandmaster at chess.

3 points

Chess can be easily formalized. Creativity can’t.

3 points

The formalization of chess can’t be practically applied. The top chess programs are all trained models that evaluate a position in a non-formal way.

They use neural nets, just like the AIs being hyped these days.
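
For what it’s worth, the “non-formal” part looks roughly like this toy sketch (the features, weights, and moves are invented; real engines like Leela or Stockfish’s NNUE use far larger networks plus deep search): the position is scored by a learned function rather than a hand-written rule, and the engine plays whatever move that function prefers.

```python
# Toy sketch: a "learned" position evaluation (random stand-in weights) used to
# pick a move, instead of a hand-coded formal rule. Features, weights, and moves
# are invented for illustration only.
import numpy as np

rng = np.random.default_rng(42)
W_hidden = rng.normal(size=(8, 16))  # stand-in for trained parameters
w_out = rng.normal(size=16)

def evaluate(position_features):
    """Score a position with a tiny one-hidden-layer 'network'."""
    return float(np.tanh(position_features @ W_hidden) @ w_out)

def pick_move(candidates):
    """One-ply 'search': play the move whose resulting position scores highest."""
    return max(candidates, key=lambda move_and_features: evaluate(move_and_features[1]))[0]

# Hypothetical candidate moves, each with an 8-dimensional feature vector.
candidates = [("e4", rng.normal(size=8)), ("d4", rng.normal(size=8)), ("Nf3", rng.normal(size=8))]
print(pick_move(candidates))
```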

-9 points

This argument was settled with electronic music in the 80s/90s. Samples and remixes taken directly from other bits of music to create a new piece aren’t plagiarism.

10 points

I’m not claiming that DJs plagiarise. I’m stating that AIs are plagiarism machines.

5 points

And you’re absolutely right about that. That’s not the same thing as LLMs being incapable of composing anything written in a novel way, but the fact that they will, with very little prodding, readily regurgitate complete works verbatim is definitely a problem. That’s not a remix. That’s publishing the same track and slapping your name on it. Doing it two bars at a time doesn’t make it better.

It’s so easy to get ChatGPT, for example, to regurgitate its training data that you could do it by accident (at least until someone published it last year). But, the critics cry, you’re using ChatGPT in an unintended way. And indeed, exploiting ChatGPT to reveal its training data is a lot like lobotomizing a patient or torture victim to get them to reveal where they learned something, but that really betrays that these models don’t actually think at all. They don’t actually contribute anything of their own; they simply have such a large volume of data to reorganize that it’s (by design) impossible to divine which source is being plagiarised at any given token.

Add to that the fact that every regulatory body confronted with the question of LLM creativity has so far decided that humans, and only humans, are capable of creativity, at least so far as our ordered societies will recognize. By legal definition, ChatGPT cannot transform (term of art) a work. Only a human can do that.

It doesn’t really matter how an LLM does what it does. You don’t need to open the black box to know that it’s a plagiarism machine, because plagiarism doesn’t depend on methods (or sophisticated mental gymnastics); it depends on content. It doesn’t matter whether you intended the work to be transformative: if you repeated the work verbatim, you plagiarized it. It’s already been demonstrated that an LLM, by definition, will repeat its training data a non-zero portion of the time. In small chunks that’s indistinguishable, arguably, from the way a real mind might handle language, but in large chunks it’s always plagiarism, because an LLM does not think and cannot “remix”. A DJ can make a mashup; an AI, at least as of today, cannot. The question isn’t whether the LLM spits out training data; the question is the extent to which we’re willing to accept some amount of plagiarism in exchange for the utility of the tool.
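
To make the verbatim-repeat point concrete without opening the black box, here’s a toy illustration (a word-level bigram model over an invented corpus, nothing like a real transformer in scale): when the statistics only cover one continuation per context, “generation” just replays the training text.

```python
# Toy bigram "language model": count which word follows which in a tiny corpus,
# then generate greedily. With so little data to remix, the output reproduces
# the training phrase verbatim. Corpus is invented for illustration.
from collections import defaultdict, Counter

corpus = "it was the best of times it was the worst of times".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length=11):
    words = [start]
    for _ in range(length):
        if words[-1] not in bigrams:
            break
        words.append(bigrams[words[-1]].most_common(1)[0][0])  # greedy: likeliest continuation
    return " ".join(words)

print(generate("it"))  # "it was the best of times it was the best of times"
```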

-1 points

If AIs are plagiarism machines, then the situation mentioned above must be an example of DJs plagiarising.

-3 points

And you’re stating utter bollocks

3 points

The samples were intentionally rearranged and mixed with other content in a new and creative way.

When sampling took off, the copyright situation was sorted out and the end result is that there are ways to license samples. Some samples are produced like stock footage that can be purchased inexpensively, which is why a lot of songs by different artists include the same samples. Samples of specific songs have to be licensed, so a hip hop song with a riff from an older famous song had some kind of licensing or it wouldn’t be played on the radio or streaming services. They might have paid one time, or paid an artist group for access to a bunch of songs, basically the same kind of thing as covers.

Samples and covers are not plagiarism if they are licensed and credit their source. Both create something new while using and crediting existing works.

AI is doing the same sampling and copying, but trying to pretend that it is somehow not sampling and copying, and the companies running AI don’t want to credit the sources or license the content. That is why AI is plagiarism.

1 point

Not even remotely the same. A producer still has to choose what to sample, and what to do with it.

An AI is just a black box with a “create” button.

-11 points

Ray Parker Jr.’s “Ghostbusters” is inspired by Huey Lewis and the News’ “I Want a New Drug”. But actually it’s just blatant plagiarism. Is it okay because a human did it?

8 points

You talk like a copyright lawyer and have no idea about music.

7 points

Nope, human plagiarism is still plagiarism

-14 points

This is true, but AI is not plagiarism. Claiming it is shows you know absolutely nothing about how it works.

17 points

Correction: they’re plagiarism machines.

I actually took courses in ML at uni, so… Yeah…

2 points

In the ML course at uni they said, verbatim, that they are plagiarism machines?

Did they not explain how neural networks start generalizing concepts? Or how abstractions emerge during training?

-7 points

So did I. Clearly you failed

8 points

Please tell me how an AI model can distinguish between “inspiration” and plagiarism then. I admit I don’t know that much about them, but I was under the impression that they just spit out something they “think” is the best match for the prompt based on their training data, and thus could not make this distinction in order to actively avoid plagiarism.

2 points

Please tell me how an AI model can distinguish between “inspiration” and plagiarism then.

[…] they just spit out something they “think” is the best match for the prompt based on their training data, and thus could not make this distinction in order to actively avoid plagiarism.

I’m not entirely sure what the argument is here. Artists don’t scour the internet for any image that looks like their own drawings to avoid plagiarism, and often use photos or the artwork of others as reference, but that doesn’t mean they’re plagiarizing.

Plagiarism is about passing off someone else’s work as your own, and image-generation models are trained with the intent to generalize - that is, to generate things they’ve never seen before, not just to copy. That’s why we can create an image of an astronaut riding a horse even though the model obviously would never have seen one, and why we can teach the models new concepts with methods like textual inversion or Dreambooth.
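
For the curious, the astronaut-on-a-horse example is basically a one-liner with an off-the-shelf text-to-image pipeline. A minimal sketch using the Hugging Face diffusers library (the checkpoint name and settings here are my assumptions, and it needs a GPU):

```python
# Minimal text-to-image sketch with Hugging Face diffusers. The checkpoint name
# and dtype are assumptions; the point is only that the prompt describes a scene
# the training data cannot have contained as-is.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut_rides_horse.png")
```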

-5 points

Go read about latent diffusion
