“Publicly available data” - I wonder if that includes Disney’s catalogue? Or Nintendo’s IP? I think they are veeery selective about their “publicly available data”. It also implies the only requirement for such training data is that it is publicly available, which would cover almost every piece of media ever made. How an AI model isn’t public domain by default baffles me.
The problem is that if copyrighted works are used, you could generate a copyrighted artwork that would then be made public domain, stripping it of its protection. I would love this approach; the problem is the lobbyists don’t agree with me.
Not necessarily, if a model is public domain, there could still be a lot of proprietary elements used in interpreting that model and actually running it. If you own the hardware and generate something using AI, I’d say the copyright goes to you. You use AI as the brush to paint your painting and the painting belongs to you, but if a company allows you to use their canvas and their painting tools, it should go to them.
If you rent a brush to paint with, is the painting not yours? If you rent a musical instrument to record an original song with, is the song not yours?
The existing legal precedent in most places is that most uses of ML don’t count as human expression and don’t get copyright protection. You have to have significant control over the creation of the output to hold copyright (the easiest workaround is simply to modify the ML output manually and only release the modified version).
I think that if I paint a Mario artwork with my own brush that isn’t up to Nintendo’s standard, they have the legal power to take it down from wherever I upload it.
There is a rumor that OpenAI downloaded the entirety of LibGen to train their AI models. No definitive proof yet, but it seems very likely.
https://torrentfreak.com/authors-accuse-openai-of-using-pirate-sites-to-train-chatgpt-230630/
Great articles; the first is one of the best I’ve read about the implications of fair use. I’d argue that because of the breadth of human knowledge interpreted through these models, everyone is entitled to unrestricted access to them (not to the servers or algorithms used, just the models). I’ll dub it “the library of the digital age” argument.
It’s almost impossible to audit what data went into an AI model. As long as that’s true, companies can scrape and use whatever they like, and no one will be the wiser about what data got used or misused in the process. That makes it hard to hold such companies accountable for what they are using and how.
Then the burden needs to be on companies to prove their audit trail, and until then, all development should be required to be open source.
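Purely as an illustration (everything here is hypothetical: the paths, the license tag, and the manifest format are all made up), a minimal sketch of what such an audit trail could look like is a manifest recording a content hash and a license tag for every file that goes into training, so a third party could later verify what was and wasn’t in the corpus:

# Hypothetical sketch of a training-data provenance manifest.
# Each source file is recorded with a content hash so outsiders could
# later verify exactly which documents were (or were not) in the corpus.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's contents so the manifest entry is verifiable later."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(corpus_dir: str, license_tag: str) -> list[dict]:
    """Walk a corpus directory and record provenance for every file."""
    entries = []
    for path in sorted(Path(corpus_dir).rglob("*")):
        if path.is_file():
            entries.append({
                "path": str(path),
                "sha256": sha256_of(path),
                "license": license_tag,  # in practice this would come from real metadata
            })
    return entries

if __name__ == "__main__":
    manifest = build_manifest("corpus/", "unknown")
    Path("training_manifest.json").write_text(json.dumps(manifest, indent=2))
    print(f"Recorded {len(manifest)} source files.")

None of this is hard technically; the hard part is making anyone publish such a manifest in the first place.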
That would be amazing. But it won’t happen any time soon, if ever… I mean, just think about all that investment in GPU compute and the need to realize good profit margins. Until there are laws and regulations that require AI companies to open their data pipelines and make all details about their data sources public, I don’t think much will happen. They’ll just keep feeding in any data they can get their hands on, and nothing can stop that today.
Maybe not today, and maybe not every AI, but some AI in the near future may have its data sources made explainable. There are a lot of applications where deploying AI would be an improvement over what we have. One example I can bring up is in-silico toxicology experiments. There’s been a huge desire to replace as many in-vivo experiments as possible with in-vitro or, even better, in-silico ones, to minimize the number of live animals tested on, both for ethical reasons and for cost savings. AI has been proposed as a new tool to accomplish this, but it’s not there yet. One of the biggest challenges to overcome is making the AI models used in-silico explainable, because we cannot regulate effectively what we cannot explain. Regardless, there is a profit incentive for AI developers to make at least some AI explainable; it’s just not where the big money is. To what extent that will apply to all AI, I haven’t the slightest idea. I can’t imagine OpenAI would do anything to expose their data.
Until there are laws and regulations that require AI companies to open their data pipelines and make all details about their data sources public, I don’t think much will happen.
I don’t expect those laws to ever happen. They don’t benefit large corporations so there’s no reason those laws would ever be prioritized or considered by lawmakers, sadly.
Ask a man his salary. Do it. How else are you supposed to learn who is getting underpaid? The only way to rectify that problem is to learn about it in the first place.
The NLRA makes discussing wages a protected right, and the NLRB enforces it.
Talk about your wages.
I think context is important here. Asking a co-worker their salary is fine. Asking about the salary of someone you’re on a date with is not fine.
Exactly.
You should have asked them for their W-2 before agreeing to meet.
I love how it isn’t just an image of the OpenAI logo but also a sad person beside it
Oh, that’s not just some person, that’s the CTO of "Open"AI being asked whether YT videos were used to train Sora.
Lying MF, unbelievable that this is the best they could come up with.
I’m sorry, but we’ve made an internal decision not to reveal our proprietary methodology at this time.
There, now it’s not a lie (hurr durr I’m only the CTO how would I know whether a tiny startup like YOUTUBE was one of our sources)
Here is an alternative Piped link(s):
https://piped.video/mAUpxN-EIgU?feature=shared&t=270
What’s wrong with her face?
It’s this face: https://www.compdermcenter.com/wp-content/uploads/2016/09/vanheusen_5BSQnoz.jpg
She was asked about OpenAI using copyrighted material for training data and literally made that face. The only thing more perfect would’ve been if she had tugged at her collar while doing it.