This is just one action in a coming conflict. It will be interesting to see how this shakes out. Does the record industry win, and do digital likenesses become outlawed, even taboo? Or do voice, appearance, etc. just become another set of rights that musicians have to negotiate during a record deal?
This will definitely set some precedent on how AI music is treated. I’m on the side of the monkey with the camera: anything made by these large models is public domain. I’m sure these record companies would be ecstatic if they could license an artist’s voice without having to have them sing anything new.
Hopefully that is how it goes down. That precedent has already been set for images, at least for text-to-image generations.
Unfortunately the music industry has a lot of money to throw at lawyers, and I could see an argument that this is a little bit different if you’re directly using someone’s likeness, like their voice.
Maybe, but I would argue everything you said has already happened many times over. People were probably saying the same thing when cameras were invented. Why would people sit for hours for a painter to paint them when they can sit for 30 seconds to have a photograph taken? That is arguably a more accurate representation.
Or how about computer animation? I am sure many artists lost their jobs during that switchover as well. Computers could just figure out the in-between frames instead of a person having to draw them manually, frame by frame.
But artists have adapted over the years, and now we have entirely new kinds of artists: 3D animators, photographers, even game designers, YouTube thumbnail creators, ad designers, drone operators, etc. Artists have more ways to create art than ever before and more ways to monetize it. More importantly, normal people have more time to consume art than ever, to the point that it’s almost becoming a problem.
I really don’t see AI as being any different. Sure, it will drastically change the industry, and if artists don’t adapt they will be in trouble, but that is nothing new. It has happened many times before and will happen many more times in the future. And in every case, I would say art has become better each time technology has advanced.
I wonder if these battles will shake loose the circuit split on de minimis exceptions to music samples (see https://lawreview.richmond.edu/2022/06/10/a-music-industry-circuit-split-the-de-minimis-exception-in-digital-sampling/).
Currently, it is absolutely not “cut and dried” whether the use of any given sample should be permitted. Most musicians are erring on the side of “clear everything,” but does an AI-generated “simulacrum” qualify as “sampling”?
What’s on trial here is basically “what characteristic(s) of an artist’s work do they own?” If you write a song, you can “own” whatever is written down (melody, lyrics, etc.) If you perform a song, you can own the performance (recordings thereof, etc.) Things start to get pretty vague when we start talking about “I own the sound of my voice.”
I think it’s accepted that it’s legal for an impersonator to make a living doing TikToks pretending to be Tom Cruise. Tom Cruise can’t really sue them saying “he sounds like me.” But is it different if a computer does it? It may very well be.
It’s going to be a pretty rough few years in copyright litigation. Buckle up.
What’s more, if they over-litigate, the over-litigating country’s economy will fall behind as the rest of the world overtakes the USA. There is no if or but in this scenario. For instance, poor people in third-world countries will absolutely leverage these technologies to boost their ability to make an income.
Corporate middlemen on AI model generated content: “When we do it, it’s okay! But when you do it, it’s stealing!”
This genie can’t be put back in the bottle, and what they wished for has become a monkey’s paw for the media monopolies who thought they could replace all their artists with an unpaid robot. They’ll try to update the laws to stop this, but it’s already too late.
A lot of the AI stuff is a Pandora’s box situation. The box is already open, there’s no closing it back. AI art, AI music, and AI movies will become increasingly high quality and widespread.
The biggest thing we still have a chance to influence is whether it’s something that individuals have access to, or whether it becomes another field dominated by the same tech giants that already own everything. An example is people being against Stable Diffusion because it’s trained by individuals on internet images, but then being OK with a company like Adobe doing the same because Adobe snuck a line into its ToS saying it can train AI on anything uploaded to Creative Cloud.
“whether it’s something that individuals have access to”
No we don’t. That’s the box being opened.
Here’s a leaked Google internal memo saying as much: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
tl;dr: The open-source community has accomplished more in the month since Meta’s AI weights were released than everything we have, and it shows no signs of slowing down. We have no secret sauce and no way to prevent anyone from setting up their own. The open-source community already has near-GPT equivalents running on old laptops and is targeting models that run directly on phones, making our expensive monolithic AI solutions entirely obsolete.
Edit:
In addition, these corporations only have AI in the first place by stealing/scraping data from regular people and the open source community. Individuals should not feel obligated to honor any rule or directive that these technologies be owned and operated by only big players.
The only advantage corporations could have had came from having the money to throw at extremely high quality training data. The fact that they cheaped out and just used whatever they could find on the internet (or paid a vendor, who just used AI to generate AI training data) has definitely contributed to the lack of any differentiating advantage.
Saying that Stable Diffusion was trained by “individuals” is a bit of a stretch; it cost over half a million dollars’ worth of compute to train, and Stability AI is still a company at the end of the day. If that counts as trained by individuals, then so do Midjourney and DALL-E.
The original Stable Diffusion wasn’t trained by individuals, but the current progression of the software is clearly largely community-driven: all sorts of new tech and add-ons, huge volumes of community-trained checkpoints and LoRAs, and of course the interfaces themselves, like automatic1111 and vladmandic.
And it’s something you can run yourself offline with a halfway decent graphics card.
I mean, the issue the RIAA is raising does not seem to be about AI training, but piracy:
The RIAA has asked Discord to shut down a server called “AI Hub,” alleging that its 145,000 or so members share and distribute copyrighted music: Shakira’s “Whenever, Wherever,” for instance, or Mariah Carey’s “Always Be My Baby.” These songs, and several others by the likes of Ludacris, Stevie Wonder, and Ariana Grande, were named in the RIAA’s June 14 subpoena to Discord (pdf).
The music files were being used as datasets to train AI voice generators, which could then churn out deepfake tracks in the styles of these singers.
Later in the article:
It wasn’t clear, from the RIAA’s letters, whether the body was complaining about the databases of original music or about the AI tracks being generated out of them.
Like, I’m sure they’re spooked by AI generated tracks and losing control of the industry… but this seems like a pretty clear cut case of shutting down a Discord server engaged in music piracy.
Oh, so they want a repeat of the Jammie Thomas-Rasset case? Lawyers must be bored.
Deepfake music is sorta a cool idea. I always thought Radiohead’s My Iron Lung sounded like it would be amazing performed by Aerosmith. AI could make that happen.
I have excellent news for you! AI music is making great progress. (nsfw in that you likely don’t wanna play this in the office)