Two authors sued OpenAI, accusing the company of violating copyright law. They say OpenAI used their work to train ChatGPT without their consent.
It’s hard to guess what actually motivates these particular plaintiffs.
Right now it’s hard to know who is disseminating AI-generated material. Some people are explicit when they post it, but others aren’t. The AI companies, by contrast, are easily identified, and there’s at least the perception that regulating them can solve the problem of copyright infringement at the source. I doubt that’s true. More and more actors are able to train AI models, and some of them aren’t even under US jurisdiction.
I predict that we’ll eventually have people vying to get their work used as training data. Think about what that means. If you write something and an AI is trained on it, the model absorbs your claims into what it treats as “true”. Going forward, when people send prompts to that model, it will return responses based on that version of “truth”. Clever people can and will use that to influence public opinion. Consider how effectively public thought has already been manipulated with existing information technologies. Now imagine large segments of the population relying on AIs as trusted advisors for their daily lives, and consider how effective it would be to influence the training of those AIs.