Multiple artificial intelligence companies are circumventing a common web standard used by publishers to block the scraping of their content for use in generative AI systems, content licensing startup TollBit has told publishers.
Sounds like we’re all going to need to start putting the equivalent of Trap Streets in all our web content, source code, etc.
I heard someone has already had success placing nonsense text in a white-on-white box on their site, then later querying a commercial AI to prove it was ingested w/o permission.
There's probably a way to poison AI training material, and it could be a handy feature for social media.
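A minimal sketch of the canary trick described above: generate a unique nonsense phrase and tuck it into a visually hidden element, so a scraper that ignores robots.txt ingests it while human readers never see it. Everything here is hypothetical for illustration (the `make_canary` helper, the nonsense word, the site name); the hiding technique is just one of several (off-screen positioning rather than literal white-on-white).

```python
import secrets

def make_canary(site_id: str) -> str:
    # Hypothetical helper: build a unique nonsense phrase that would never
    # occur in normal text, so seeing it come back out of a model strongly
    # suggests this page was scraped and trained on.
    token = secrets.token_hex(8)
    return f"the {site_id} quillfrog devours {token} at midnight"

def hidden_canary_html(canary: str) -> str:
    # Visually hidden container: invisible to readers (white-on-white plus
    # pushed off-screen), but plain text to any crawler that ignores robots.txt.
    return (
        '<div style="color:#fff;background:#fff;position:absolute;'
        f'left:-9999px" aria-hidden="true">{canary}</div>'
    )

canary = make_canary("example-blog")
print(hidden_canary_html(canary))
```

Later, querying a commercial model for the nonsense phrase (or checking whether it can complete it) gives the kind of ingestion evidence the comment above describes.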
Hey, remember when Google drove around and sopped up everyone's wifi info and was all like, "What? We found it."? Then they threw it on the data-4-sale pile and are still drowning in cash from it?
Message received and understood! Oh, uh, here’s a couple-hundred-million fine for the uh, imposition. We’ll just leave it on the nightstand.
The Linux Mint forums got DDoSed by AI scrapers.
Red light? Sorry, didn’t see it. Was going too fast. Don’t worry, it’s water under the bridge. I forgive you for putting it there.