Google is embedding inaudible watermarks right into its AI-generated music

Audio created using Google DeepMind's AI Lyria model will be watermarked with SynthID to let people identify its AI-generated origins after the fact.
This is the best summary I could come up with:
Audio created using Google DeepMind’s AI Lyria model, such as tracks made with YouTube’s new audio generation features, will be watermarked with SynthID to let people identify their AI-generated origins after the fact.
In a blog post, DeepMind said the watermark shouldn’t be detectable by the human ear and “doesn’t compromise the listening experience,” and added that it should still be detectable even if an audio track is compressed, sped up or down, or has extra noise added.
President Joe Biden’s executive order on artificial intelligence, for example, calls for a new set of government-led standards for watermarking AI-generated content.
According to DeepMind, SynthID’s audio implementation works by “converting the audio wave into a two-dimensional visualization that shows how the spectrum of frequencies in a sound evolves over time.” It claims the approach is “unlike anything that exists today.”
The news that Google is embedding the watermarking feature into AI-generated audio comes just a few short months after the company released SynthID in beta for images created by Imagen on Google Cloud’s Vertex AI.
The watermark is resistant to editing like cropping or resizing, although DeepMind cautioned that it’s not foolproof against “extreme image manipulations.”
It does this by converting the audio into a 2D visualisation that shows how the spectrum of frequencies in a sound evolves over time.
Old-school Windows Media Player has entered the chat
Seriously fuck off with this jargon, it doesn’t explain anything
Sounds like a bad journalist hasn’t understood the explanation. A spectrogram contains all the same data as was originally encoded. I guess all it means is that the watermark is applied in the frequency domain.
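If "applied in the frequency domain" is the right reading, a toy version is easy to sketch: add an extremely low-amplitude carrier at a frequency bin the content doesn't use, then detect it by measuring the energy at that bin. This is purely illustrative, pure stdlib, and has nothing to do with how SynthID actually works (SynthID claims robustness to compression and speed changes; this toy has neither):

```python
import cmath
import math

N, SR = 256, 8000  # frame length and sample rate for this toy

def bin_energy(signal, k):
    """Magnitude of the k-th DFT bin of a length-N signal (naive DFT)."""
    acc = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return abs(acc)

# "host" audio: a 500 Hz sine, which lands exactly on DFT bin 500*N/SR = 16
host = [math.sin(2 * math.pi * 500 * n / SR) for n in range(N)]

# embed: add a tiny carrier at bin 100 (~3.1 kHz), far from the host's energy
WM_BIN, EPS = 100, 0.002
marked = [host[n] + EPS * math.cos(2 * math.pi * WM_BIN * n / N) for n in range(N)]

# detect: the carrier shows up as energy EPS*N/2 = 0.256 at its bin,
# while clean audio has essentially zero there
def has_mark(signal):
    return bin_energy(signal, WM_BIN) > 0.1

print(has_mark(marked), has_mark(host))  # True False
```

A real scheme spreads the mark across many bins and frames so it survives compression and resampling; a single fixed carrier like this would be trivial to filter out.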
Also this isn’t new by any stretch… Aphex Twin would like a word
That’s actually an accurate description of what is happening: an audio file turned into a 2D image, with the x-axis being time, the y-axis being frequency, and color being amplitude.
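For anyone who wants to see that concretely: a spectrogram is just a short-time Fourier transform with the magnitudes kept. A minimal pure-stdlib sketch (naive DFT, no windowing, nothing like production STFT code, and unrelated to SynthID itself):

```python
import cmath
import math

def spectrogram(signal, frame_size=64, hop=32):
    """Slice the signal into overlapping frames; return per-frame DFT magnitudes.
    Rows = time (frame index), columns = frequency bins, values = amplitude."""
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = signal[start:start + frame_size]
        mags = []
        for k in range(frame_size // 2):  # keep only non-negative frequencies
            acc = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                      for n in range(frame_size))
            mags.append(abs(acc))
        frames.append(mags)
    return frames

# a 500 Hz tone at an 8 kHz sample rate: the energy should land in
# bin 500 * 64 / 8000 = 4 of every frame
SR = 8000
tone = [math.sin(2 * math.pi * 500 * n / SR) for n in range(512)]
spec = spectrogram(tone)
peak = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak)  # 4
```

Plot `spec` as an image and you get exactly the picture the article describes (and, yes, roughly what old visualizers rendered in real time).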
People are listening to AI-generated music? Someone on Bluesky put it best (paraphrased slightly):
If they couldn’t put time into creating it I’m not going to put time into listening to it.
I think I’d rather listen to some custom AI generated music than the same royalty free music over and over again.
In both cases they’re just meant to be used in videos and stuff like that, you’re not supposed to actually listen to them.
Fun fact: Steve1989MREInfo uses all of his original music for his videos.
People are using AI tools to do crazy stuff with music right now. It’s pretty great
Human performance but AI voice: https://www.youtube.com/watch?v=gbbUWU-0GGE
Carl Wheezer covers: https://www.youtube.com/watch?v=65BrEZxZIVQ
Ok, boomer.
How’s that microwave dinner taste? Like an A for effort? Yeah, I bet.
Yikes. TIL you think music sounds good based on how much time went into making it, not how it actually sounds.
Can’t wait for you to hear something you like then pretend it’s bad when you find out it was made by AI.
That’s not really a gotcha though. They’re saying they aren’t going to actively seek out and listen to auto-generated music. If they happen to hear some and like it, that wouldn’t mean they actively sought it out and listened to it.
This assumes music is made and enjoyed in a void. It’s entirely reasonable to like music much more if it’s personal to the artist. If an AI writes a song about a very intense and human experience it will never carry the weight of the same song written by a human.
This isn’t like food, where snobs suddenly dislike something as soon as they find out it’s not expensive. Listening to music often has the listener feel a deep connection with the artist, and that connection is entirely void if an algorithm created the entire work in 2 seconds.
What if an AI writes a song about its own experience? Like how people won’t take its music seriously?
That’s a parasocial relationship, and it’s not healthy. Sure, Taylor Swift is kinda expressing her emotions from real failed relationships, but you’re not living her life and you never will. Clinging to the fantasy of being her feels good and makes her music feel special to you, but it’s just fantasy.
Personally I think it would be far better if half the music was AI and people had to actually think about whether what they’re listening to actually sounds good and interesting, rather than being meaningless mush pumped out by an image-obsessed Scandinavian metal nerd or a pastiche of borrowed riffs thrown together by a drug-frazzled Brummie.
I don’t think that’s OP’s point, but it’s interesting how many classic songs were written in less than 30 minutes.
As someone that’s more than dabbled in making music: the best tracks I made all came out rather quickly. They still needed a lot of work to finish/polish, but the tracks where I would spend hours coming up with the core elements would usually be trash and end up in the bin. The good stuff would just… happen.
My own feelings on the matter aside (fuck Google and all that), this has been something chased after for a long time. The famous composer Raymond Scott dedicated the back end of his life to trying to create a machine that did exactly this. Many famous musical creators, such as Michael Jackson, were fascinated by the machine and wanted to use it. The problem was that it was never “finished”. The machine worked and it could generate music; it’s immensely fascinating in my opinion.
If you want more information in podcast format, check out episode 542 of 99% Invisible, or here: https://www.thelastarchive.com/season-4/episode-one-piano-player
They go into the people who opposed Scott and why they did, and also talk about the emotion behind music and the artists, and whether it would even work. The most fascinating part of it all is that the machine was kind of forgotten, and it no longer works. Some currently famous musicians are trying to work together to restore it.
The question then is, if someone created their life’s work and modern musicians spend an immense amount of time restoring the machine, when the machine creates music does that mean no one spent time on it? I enjoy debating the philosophy behind the idea in my head, especially since I have a much more negative view when a modern version of this is done by Google.
I feel like the machine itself would be the art in that case, not necessarily what it creates. Like if someone spent a decade making a machine that could cook FLAWLESS BEEF WELLINGTON, the machine would be far more impressive and artistic than the products it made
I mean, where do you draw the line, necessarily, between the machine and what it creates? The machine itself is totally useless without inputs and outputs, not to say art needs utility. The beef wellington machine is only notable for its ability to conjure beef wellington; otherwise it’s just a nothing machine. Which is still kind of cool, I guess, but the beef wellington machine not making beef wellington is kind of a disregard for the core part of the machine, no?
That was a great episode of 99PI. Would love the machine restored.
IIRC, it’s not so much that it made music, but that it would create loops through iteration to inspire people. He wanted it to make full music, but it was never close to that.
Can it be much different from the mass-market auto-tuned pap that gets put out today?
The singers of that music actually have to use their voice to sing into a mic compared to someone on a computer typing in a prompt.
As much as I dislike modern pop music, I will definitely say they put in more work than the people who rely solely on an AI that will do all the work based on a prompt.
Lately on YouTube I’m constantly being bombarded with AI garbage music passed off as normal unknown bands, and it’s getting really annoying. What will happen when there’s an actual new band, but everyone ignores them because you’d think it’s just AI?
What will happen when there’s an actual new band, but everyone ignores them because you’d think it’s just AI?
Their music will speak for itself and elevate them above the AI that is making worse music.
You’re asking the wrong question. What happens when you hear something you like, then find out it’s made by AI and all of a sudden you have to pretend you never liked it?
A needle in a haystack is much harder to find if the haystack is the size of a truck. People don’t have infinite time to listen to music, and if it’s almost all the same, they’ll stop trying to find upcoming artists, ai or not.
I think the vast majority of up-and-comers in the music scene come not just from randomly going viral (which I don’t think will be a concern with AI music anyway, since it will probably sound like shit for the next 50+ years), but from trolling around and doing local shows in venues that they know will attract the people who like the noise. I don’t think it’s very hard to distinguish between AI and people in that context.
Music snobs have been doing this for decades: pretending to like the shittiest Pink Floyd B-side because the normies don’t get it, and acting like ABBA’s entire catalogue isn’t solid bangers because disco isn’t cool, until it was again, at which point they’d always loved it.
It’ll be just like it always is: Pete Seeger with an axe trying to stop Bob Dylan playing an electric guitar. I remember when people hated D&B and said it wasn’t real music and all that shit; now they’re all telling bullshit stories about how they were OG junglist massive.
People will use AI to make really cool things, and a loud portion of the population will act superior by pretending it’s bad. Time will pass, and when the next thing comes along, all those people will point at the AI music and say “your new music will never be as good as real music like that”, but the people listening to atonal arithmic echolocation beats to study to, or whatever the next trend is, won’t pay them any attention.
This raises the question: will AI style be the next big trend? Imagine if real painters started painting oil paintings that look uncanny and surreal like AI-generated art, with weird hands or weird eyes. Imagine if a real quartet decided to play an AI-generated piece of music.
I wonder if being able to generate music will make people less interested in actually bothering to learn how to do it themselves. Having AI tools makes many things so much easier, and you only need a rudimentary understanding of the subject.
I believe it will depend on a couple different factors. Putting keywords into a generator isn’t the same as laying your hands on an instrument, being able to physically play it yourself. However, if the result is so perfect and beautiful that a person could have never possibly come up with it on their own, it might be discouraging (but I can’t really see that happening)
Yeah, like most people don’t realise, but until about 1900 most piano music was played by humans; of course, there were no pianists after the invention of the pianola, with its perforated rolls of notes and mechanical keys.
It’s sad: drums were things you hit with a stick once, but Mr Theremin ensured you never see a drummer anymore, while Mr Moog effectively ended bass and rhythm guitars with the synthesizer…
It’s a shame; it would be fun to go see a four-piece band performing live, but that’s impossible now that no one plays instruments anymore.
People are never going to stop learning to play instruments. If anything, they’ll get inspired by using AI to make music, and it’ll get them interested in learning to play. They’ll then use AI tools to help them learn, and when they get to be truly skilled with their instrument, they’ll meet up with some awesomely talented friends to form a band which creates painfully boring and indulgent branded rock.
Those are a bit of a false equivalence, because all of them still required human input to work. AI-generated music can be entirely automated: just put in a prompt, tell it to generate 10, and it’ll do the rest for you. Set up enough servers and write enough prompts, and you can have hundreds of distinct and unique pieces of AI music put online every minute.
Realistically, putting aside sentimental value, there isn’t a single piece of music that humans have made that an AI couldn’t make. But I hope your optimism turns out to be right :/
I sort of think this is looking at it wrong. That’s looking at music more like a product to be consumed, rather than one which is to be engaged with on the basis that it engenders human creativity and meaning.

That’s sort of why this whole debate is bad at conception, imo. We shouldn’t be looking at AI as a thing we can use just to discard music from human hands, or art, or whatever; we should be looking at it as a nice tool that can potentially increase creativity by getting rid of the shit I don’t wanna deal with, or making some part of the process easier.

This is less applicable to music, because you can literally just burn a CD of riffs, riffs, and more riffs (Buckethead?), but for art, what if you don’t wanna do lineart and just wanna do shading? Bad example, because you can actually just download lineart online, or just paint normal style, lineless or whatever. But what if you wanna do lineart without shading and making “real” or “whole” art? Bad example actually, you can just sketch shit out and then post it, plenty of people do. But you get the point, anyways.
Actually, you don’t get the point, because I haven’t made one. The example I always think of is Klaus. They used AI, or neural networks, or deep learning, or matrix calculation, or whatever, who cares, to automate the 3D shading of the 2D art, something that would be pretty hard to do by hand and pretty hard to automate without AI. To do it well, at least. That’s an easy example of a good use. It’s a stylistic choice, it’s very intentional, it distinguishes the work, and it does something you couldn’t otherwise just do, for this production, so it has increased the capacity of the studio. It added something and otherwise didn’t really replace anyone. It enabled the creation of an art that otherwise wouldn’t have been, and it was an intentional choice that didn’t add bullshit; it allowed them to retain their artistic integrity. You could do this with like any piece of art, if you so desired. I think this could probably be the case for music as well, just as T-Pain uses autotune (or pitch correction, I forget the difference) to great effect.
Maybe, but people who are already good at things can use it as a tool to get better. You can combine the skills you do have with AI for the skills you don’t have to make something you never could have before.
I like to make games, and for me this means I could make my own game music. I just don’t have the skills to do that on my own and make it sound good. But with AI I could get music that matches the quality of my other work.
So we’ll just need another AI to remove the watermarks… which I think already exists.