ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.
Indeed. People put that stuff up on the Internet explicitly so that it can be read. OpenAI's AI read it during training, which is exactly the use it was made available for.
Overfitting is a flaw in AI training that developers have been working to solve for a long time, and they will keep working on it for reasons entirely divorced from copyright. An AI that simply spits out copies of its training data verbatim is a failure of an AI. Why would anyone spend millions of dollars and massive computing resources to replicate the functionality of a copy/paste operation?
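For what it's worth, a crude way to picture that kind of verbatim regurgitation (the training-data extraction papers do something similar, though at the token level over far larger corpora) is to measure the longest exact run of words shared between a model's output and its training text. The corpus, output, and threshold below are made up purely for illustration, not taken from any actual model:

```python
# Minimal sketch: measure the longest contiguous run of words that a
# model's output copies verbatim from its training text. Long runs
# suggest memorization (overfitting) rather than generalization.

def longest_verbatim_run(output_words, corpus_words):
    """Length of the longest run of output_words appearing contiguously in corpus_words."""
    best = 0
    for i in range(len(output_words)):
        for start in range(len(corpus_words)):
            k = 0
            while (i + k < len(output_words)
                   and start + k < len(corpus_words)
                   and output_words[i + k] == corpus_words[start + k]):
                k += 1
            best = max(best, k)
    return best

# Toy data, purely illustrative.
training_corpus = "the quick brown fox jumps over the lazy dog near the river bank"
model_output = "a fox jumps over the lazy dog near the river somewhere"

run = longest_verbatim_run(model_output.split(), training_corpus.split())
print(f"longest verbatim run: {run} words")
# A run above some threshold (the extraction papers use on the order of
# 50 tokens) is treated as evidence of memorized training data.
```

A model that generalizes well should mostly produce short, incidental overlaps; the lawsuits hinge on the cases where the run is an entire article.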
Storing a verbatim copy and using it for commercial purposes already runs afoul of a lot of copyright licenses, even if you never distribute the text further.
The exceptions you're thinking of are usually made for personal use, or for limited use, like your browser obtaining a temporary copy of the text on a page so you can read it. The licensing on most websites doesn't grant you any additional rights beyond that, never mind the licensing of the books and other material they've got in there.
Authors Guild, Inc. v. Google, Inc. was about something even more copy-like than this, and Google won.
That lawsuit was decided mainly on the four fair use factors, and the court found they weighed in Google's favor. I don't think the same will hold for OpenAI, for example.