An update to Google’s privacy policy suggests that the entire public internet is fair game for its AI projects. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.
Yet they attempt to prevent any web-scraping of their services. Interesting.
I expect every AI company is doing that. At least Google is being honest about it.
𝕱𝖎𝖓𝖊! 𝓛𝓮𝓽’𝓼 🅂🄴🄴 𝕥𝕙𝕖𝕞 ѕ¢яαρ 𝔱𝔥𝔦𝔰!
Not sure how they can enforce their terms and conditions on me when I don’t use any of their services?
It’s not a ToS; you don’t have to agree to it. They’re just telling you what they feel they can get away with. I don’t see anywhere in the new terms where they outright assert that they own it, though; they just kinda say, “Yo, if we can see it, we’re going to use it to train AI.”
This has been discussed elsewhere, and by people smarter than I am, but chatbots are going to start learning from other chatbots, and their output is going to get less and less reliable over time, no?
Like, there is an internet from BEFORE ChatGPT, which is about as reliable a dataset as one could hope to find, and then there is the post-ChatGPT internet, where the data is already getting polluted by random LLM gibberish. How is Google’s web scraping going to know whether the data it ingests is legitimate human thought or just random made-up shit from an LLM?
There was an article recently about this (too lazy to search for it). It’s already starting to happen. If most of the content they train on is the internet, and more and more internet content is created by LLMs without being tagged as AI-generated (which can’t be guaranteed for all actors), then it’s inevitable. High-signal training data is out the window.
Likely they would limit training data to pre-2020 content to avoid this.
There are experiments with feeding LLMs output of other LLMs and the results are awful. Seems for now they can only generate sensible text if fed human output.
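The statistical intuition behind those experiments can be sketched with a toy simulation (my own illustration, not from any of the papers, and nothing like real LLM training): repeatedly fit a Gaussian to a finite sample of its own output. Each generation re-estimates the mean and standard deviation from samples of the previous fit, and the diversity of the distribution collapses toward zero.

```python
# Toy "model collapse" sketch: a Gaussian repeatedly refit on its own
# samples. Estimation noise from the finite sample compounds each
# generation, and the fitted stdev (a stand-in for output diversity)
# decays toward zero. Assumed parameters: 20 samples, 500 generations.
import random
import statistics

random.seed(0)
N = 20            # samples per generation (small, like a narrow scrape)
GENERATIONS = 500

mu, sigma = 0.0, 1.0          # generation zero: "human" data
history = [sigma]
for _ in range(GENERATIONS):
    sample = [random.gauss(mu, sigma) for _ in range(N)]
    mu = statistics.mean(sample)       # refit the model on its own output
    sigma = statistics.stdev(sample)
    history.append(sigma)

print(f"stdev: generation 0 = {history[0]:.3f}, "
      f"generation {GENERATIONS} = {sigma:.3g}")
```

The collapse happens because each refit multiplies the variance by a noisy factor whose log has a negative mean, so the process drifts toward zero diversity even though no single step is obviously biased. Real results are messier, but the direction matches the experiments: training on your own output loses the tails first.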
Right, but if they are training all new AI on shit they find online, like this comment, wouldn’t that pollute that dataset, considering I generated this comment with AI?