So I guess there are two sources of training data: companies selling it explicitly, and AI companies just scraping publicly accessible data. Not that either is “good”, but at least with public data, only the AI company profits.
Yep. That’s why the first two things I say Automattic MUST do to make things right are about proper consent controls, both for Automattic’s own use of data and for its sale to AI vendors; the third thing is a proposed proactive defense against scrapers.
Making the web un-scrapable to prevent AI training is a terrible idea that won’t even work. You’re talking about DRM against the user’s own browser… to read publicly available text… as if the LLM genie can be shoved back in its bottle.
No? That’s not what Nightshade is. Nightshade isn’t DRM; it poisons the training data rather than restricting who can read the work.