BodilessGaze
Which is the most dangerous when they hatch?
Stop submerging your bike in hydrochloric acid or brine
Just don’t mention the Nyquist-Shannon sampling theorem. Last time I did that I barely made it out of the record shop alive
“this is not an automated response” - honestly, I think this is one of the few situations where a company could be fully justified in forcing a customer to use ChatGPT for support, since that would free up the service reps to focus on helping the sane customers.
Copyright law is full of ambiguities and gray areas, some intentional and some unintentional. The concept of “fair use” is an example of an intentional gray area, since the idea is that society as a whole will benefit from allowing people to skirt copyright law in certain circumstances, and lawmakers can’t possibly hope to enumerate every such circumstance. It then falls on courts to determine if a given circumstance falls under “fair use”.

The problem is courts move very slowly when faced with a new circumstance that hasn’t been litigated before, and that’s what’s happening with AI companies training AI on copyrighted works. Once decisions have been made and stare decisis is established, then they’ll move faster. NY Times vs OpenAI is the case to watch IMO, since that’s the biggest one challenging the idea that training AI is fair use.