And that’s the issue I in particular have. It’s a double standard, and not only that, they’re using it to generate money for their own tools.
It’s not the same as some kid pirating Photoshop to play around with, or a couple who are curious about GOT and want to watch it without paying HBO.
This is a separate issue, and I hate that this place is so reddit-like that trying to talk about it gets “hurr durr I guess you’re mad because AI and Meta are just the current hate-train circlejerk, hurr, I form my own opinions, hurr.”
Like, no, I’m upset because this is a whole new category of piracy.
I’m not upset, because I think it’s totally irrelevant: training AI does not reproduce any works, and it is no different from a person who reads or sees those works and then talks about them or creates in their style.
At its core, if this is given legal precedent, the distilled issue amounts to thought policing. It would be a massive regression of fundamental human rights with terrible long-term implications. This is no different from how allowing companies to own your data and manipulate you has directly led to a massive regression of human rights over the last 25 years. Reacting like foolish Luddites to a massive change that seems novel in the moment will have far-reaching consequences that most people lack the fundamental logic skills to put together in their minds.
In practice, offline AI is like having most of the knowledge of the internet readily available for your own private use, in a way that is custom-tailored to each individual. I’m actually running large models on my own computer daily. This is not hypothetical or hyperbole; it’s empirical.