The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

The complaint also argues that these AI models “threaten high-quality journalism” by hurting the ability of news outlets to protect and monetize content. “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” the lawsuit states.

The full text of the lawsuit can be found here.

15 points

What, did ChatGPT find an algorithm that writes irresponsible “both sides are equally as bad” news articles faster and better than the New York Times? I can see why that’d rattle their cage. You know, the bird cage lined with copies of the New York Times.

5 points
Removed by mod
2 points

If you use any form of copyrighted work to train your model, then that model should not be used for any commercial purpose. That should be the rule here.

1 point

Maybe licensing public comments isn’t that dumb after all?

CC BY-NC-SA 4.0

-13 points

If the garbage that comes out of ChatGPT can be considered legitimate competition, then the New York Times sucks at journalism.

24 points

It’s not legitimate competition; that’s the entire point. The claim is that AI models rely on stealing content and changing it slightly or not at all. If a “regular” journalist did this, they would get into trouble. Just because the entity doing it is an AI company doesn’t make the business model legitimate.

A few years ago there was a big plagiarism scandal at IGN because one of their “journalists” mostly took other people’s reviews, changed a few words, and published them. Obviously that’s not fine.

3 points

Is the TL;DRbot fair use?

1 point

Nobody is profiting off the TL;DR bot, so they’re completely incomparable, and you’ve shown just how out of your element you are in this discussion.

1 point

Probably not.

1 point

Yeah, probably, since it’s not being used commercially.
