Barack Obama: “For elevator music, AI is going to work fine. Music like Bob Dylan or Stevie Wonder, that’s different”::Barack Obama has weighed in on AI’s impact on music creation in a new interview, saying, “For elevator music, AI is going to work fine”.
why do i care what obama feels about either of these
I very rarely care what most 62-year-olds have to say about the theoretical limits of computation.
This isn’t much different.
If the 62-year-old had studied computer science and specialized in AI, I would listen closely to them.
But I definitely don’t care about a politician who has no idea about technology.
I mean — he’s defending human creativity and he’s kind of right. AI can recreate variations of the things it is trained on, but it doesn’t create new paradigms.
People always say AI only creates variations, but many successful TV shows are variations too. I started watching sitcoms from the 70s, and a lot of it was copied or adapted in recent shows.
Yeah, also I think there is something about the human connection and communicating personal ideas and feelings that just isn’t there with AI-generated art. I could see a case for the argument that a lot of music today is recorded by artists who didn’t write that music, and that they are expressing their own feelings through their performance of someone else’s creation. And is it really all that different if an AI wrote something that resonated with an artist who ultimately performed it? For a good chunk of pop-culture regurgitations, that may be completely valid. But in my opinion, the best art communicates emotion, which is an experience unique to biology. AI might be able to approximate it, and sure, there’s a human prompting the AI who might genuinely have those feelings, but there’s a hollowness to it that I struggle to ignore. But maybe I’m just getting older and will be yelling at clouds before long.
…so I’ve been on a shit load of elevators, and I don’t recall a single one of them having music. For as common a trope as it is, you’d think elevator music would be more common in actual elevators.
It’s like porn: it all used to have music, and people still joke about how bad it was, but now it’s just gone.
It’s not as common as it used to be, but I think the point was kind of that you’re not supposed to notice it?
Look into “muzak” (the style of music; apparently it’s also a brand, according to Google), and some of Brian Eno’s ambient albums like “Music for Airports” (definitely sparser than elevator music, which was often smooth-jazz versions of classic songs, but along similar lines).
I don’t like to think I’m that old, and I 100% remember elevator music.
Edit: was possibly thinking of “musique concrète” rather than muzak.
I love Eno’s ambient music, but it’s really distinct from the cheesy muzak you’re referring to.
I’ve been in a few that played music (or more recently, ads) in them. But, yeah, it’s like quicksand in that I was led to believe it would be pretty ubiquitous.
I worked in an office that installed music in the bathrooms. It wasn’t there for a long time, and then they added it. An email went out at one point instructing people to stop turning off the music (someone figured out where the Sonos controls were I guess). Someone at the top had decided it was IMPERATIVE to have something to listen to other than the coworker grunting next to you.
There is no way this ages well.
I think the statement was more about the impact, which will depend on each person’s subjective experience
Personally I agree. Even if AI could produce identical work, the impact would be lessened. Art is more meaningful when you know it took time and was an expression/interpretation by another human (rather than a pattern prediction algorithm Frankenstein-ing existing work together). Combine that with the volume of AI content that’s produced, and the impact of any particular song/art piece is even more limited.
People are social, if enough people feel the same way about one thing it’ll succeed. It doesn’t matter where it came from or how it was made, like how people can still admire and appreciate nature. Or maybe the impact will be that it reduces all impacts. Every group and subgroup might be able to have their own thing.
I don’t know. I think Obama kind of nailed it. AI can create boring and mediocre elaborations just fine. But for the truly special and original? It could never.
For the new and special, humans will always be required. End of line.
At this point I want a calendar of the dates people say “AI could never” (like “AI could never explain why a joke it’s never seen before is funny”, March 2019) and the dates it happens (in that case, April 2022).
(That “explaining the joke” bit is actually what prompted Hinton to quit and switch to worrying about AGI sooner than expected.)
I’d be wary of betting against neural networks, especially if you only have a casual understanding of them.
I mean, the limitations of LLMs are very well documented; they aren’t going to advance a whole lot more without huge leaps in computing technology. There are limits on how much context they can store, for example, so you aren’t going to have AIs writing long epic stories without human intervention. And they’re fundamentally incapable of originality.
General AI is another thing altogether that we’re still very far away from.
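The context limit mentioned above can be sketched in a few lines. This is a toy illustration, not any real model’s API: the token budget is made up, and word count stands in for a real tokenizer. The point is just that anything older than the budget is simply invisible to the model.

```python
# Toy sketch of a fixed context window: keep only the most recent
# messages that fit, drop everything older. Word count is a crude
# stand-in for tokens; the budget of 50 is arbitrary.

CONTEXT_BUDGET = 50  # hypothetical token budget

def fit_to_context(history, budget=CONTEXT_BUDGET):
    """Keep the most recent messages whose total word count fits the budget."""
    kept, used = [], 0
    for message in reversed(history):  # newest first
        cost = len(message.split())
        if used + cost > budget:
            break  # everything older than this is simply forgotten
        kept.append(message)
        used += cost
    return list(reversed(kept))

# Five "chapters" of ~22 words each: only the last two fit the budget,
# so a model with this window has no memory of chapters 1-3.
chapters = [f"Chapter {i}: " + "word " * 20 for i in range(1, 6)]
visible = fit_to_context(chapters)
```

That forgetting is why a long epic story drifts without a human keeping track of what happened outside the window.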
AI can create boring and mediocre elaborations just fine.
Now.
A year ago even the boring stuff was impossible. Six months ago it’d do everything okay but fumble the details. Today? Sometimes the only reason AI art stands out is that models like central framing and eye contact.
Six months from now, I don’t know and you don’t either. You can posture about the indomitable human et cetera and ignore how it mirrors past declarations about what’s possible right now. The joke goes, “AI is whatever hasn’t been done yet.” And buddy, that category never gets bigger.
If there are rules, a deep enough network can discern them. And we are a lot less complex than we’d like to think.
Now who’s trying to predict the future?
I’ll believe it when I see it, and all the bravado of Internet strangers isn’t going to suddenly make me “believe”. Just because you can’t tell it’s a fake doesn’t mean others won’t be able to discern the difference.
I think it will eventually become obsolete, because we keep changing what ‘AI’ means. But current AI largely just regurgitates patterns; it doesn’t yet have a way of ‘listening’ to a song and actually judging whether it’s good or bad.
So it may expertly regurgitate the pattern that makes up a good song, but humans spend a lot of time listening and perfecting every little aspect before something becomes an excellent song, and I feel like that will be lost on the pattern-regurgitating machine if it’s forced to deviate from what a human composed.
I have seen a couple of successful artists in different genres admit to using AI to help them write some of their most popular songs, and describe its use in the songwriting process. You hit the nail on the head with AI not being able to tell if something is good or bad. It takes a human ear for that.
AI is good at coming up with random melodies, chord progressions, and motifs, but it is not nearly as good at composing and producing as humans are, yet. AI is just going to be another instrument for musicians to use, in its current form.
Yeah, and I imagine it won’t be just AIs either. Then it will obviously be possible to take it to an excellent song, given enough human hours invested.
I do wonder how useful it will actually be for that, though. Oftentimes it really fucks you up to try to go from good to excellent, and it can be freeing to start fresh instead. In particular, ‘excellent’ does require creative ideas, which are easier for humans to generate with a fresh start.
But AI may allow us to start over fresh more readily, if it can just give us a full song when needed. Maybe it will even be possible to give it some of those creative snippets and ask it to flesh it all out. We’ll have to see…
I’m a software engineer, and my company jumped on the AI bandwagon and got us GitHub Copilot. After using it for a while, I think the overall experience is actually a net negative. Yes, sometimes it gets things right and provides a correct solution, but often I can write much more concise code myself. Many times it produces code that looks correct but, on closer inspection, is actually wrong. So now I need to be on guard about what code it inserts, which kills all the time it supposedly saved me. It makes things harder because the code does look like it might work.
It is like pair programming with a complete moron that is very good at picking up patterns and reusing them in the code that follows. So if you do a lot of copy and paste, I think it will help.
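A hypothetical example of that “looks correct but isn’t” failure mode (made up for illustration, not actual Copilot output): a pagination helper that reads perfectly fine in review but silently drops the first page.

```python
# Plausible-looking but wrong: for 1-indexed pages, starting at
# page * page_size skips the first page entirely.
def paginate(items, page, page_size):
    """Return the given 1-indexed page of items."""
    start = page * page_size          # BUG: off by one page
    return items[start:start + page_size]

# Correct version: page 1 must start at offset 0.
def paginate_fixed(items, page, page_size):
    """Return the given 1-indexed page of items, starting from offset 0."""
    start = (page - 1) * page_size
    return items[start:start + page_size]

data = list(range(10))
# paginate(data, 1, 3) returns [3, 4, 5]: reads fine, silently loses 0-2.
# paginate_fixed(data, 1, 3) returns [0, 1, 2] as intended.
```

Nothing about the buggy version looks wrong at a glance, which is exactly why reviewing generated code eats the time it saves.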
I think this technology can make bad programmers suck less at programming. The LLM problem is that it was trained on existing works, and the way it works is that its goal is to convince a human that the result was created by another human; it isn’t capable of doing any actual reasoning.
Wow, my experience has been pretty much the exact opposite of this. Copilot is amazing and I’d rather not go without it ever again
Edit: for the life of me I’ll never understand people. This comment got a bunch of downvotes and yet some douchebag who blindly accuses me of being bad at my job gets upvoted. Fuck people.
What language do you program in, and what kind of code do you develop? Before Copilot, were you frequently searching for answers on Stack Overflow?
Ignore them. At some point you gotta realize most people are losers trying to bring others down with them.
Do what works for you :)
Do people actually care what Obama has to say about AI? I’m just having a hard time seeing where his skillset overlaps with this topic.
Probably as much as I care about most other people’s thoughts on AI. As someone that works in AI, 99% of the people making noise about it know fuck all about it, and are probably just as qualified as Barack Obama to have an opinion on it.
What do you do exactly in AI? I’m a software engineer interested in getting involved.
I work for Amazon as a software engineer, and primarily work on a mixture of LLM’s and compositional models. I work mostly with scientists and legal entities to ensure that we are able to reduce our footprint of invalid data (i.e. anything that includes deleted customer data, anything that is blocked online, things that are blocked in specific countries, etc). It’s basically data prep for training and evaluation, alongside in-model validation for specific patterns that indicate a model contains data it shouldn’t have (and then releasing a model that doesn’t have that data within a tight ETA).
It can be interesting at times, but the genuinely interesting work seems to happen on the science side of things. They do some cool stuff, but have their own battles to fight.
There is a tad bit of difference between caring about an opinion and tolerating one. Obama’s opinions on AI are unqualified pop-culture nonsense. They wouldn’t be relevant in an actual discussion that cites relevant technical, economic, and philosophical aspects of AI.
I know this was once said about the automobile, but I am confident in the knowledge that AI is just a passing fad
Why? It’s a tool like any other, and we’re unlikely to stop using it.
Right now there’s a lot of hype because some tech that made a marked impact on consumers was developed, and that’s likely to ease off a bit, but the actual AI and machine learning technology has been a thing for years before that hype, and will continue after the hype.
Much like voice-driven digital assistants, it’s unlikely to redefine how we interact with technology, but every other way I set a short timer has been obsoleted at this point, and I’m betting that autocomplete having insight into what you’re writing will just be the norm going forward.
Absolutely not. We need to learn the difference between intelligence and expertise. Is Obama an intelligent person? Of course. Is he allowed to have and voice an opinion? Sure, it’s a free country. Does that mean his opinion is informed by expertise and should dictate people’s actions, and therefore the direction of an industry? No.
This is the same logic that allows right-wing ideologues to become legitimate sources of information. A casual interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than people who “sound like they know what they’re talking about”.
This is the same logic that allows right-wing ideologues to become legitimate sources of information. A casual interest in a topic is NOT the same as being an industry expert, and the opinions of industry experts should be weighted far heavier in our minds than people who “sound like they know what they’re talking about”.
And your logic is the same followed by government agencies when they effectively agree to regulatory capture because all of the industry experts work at this company, so why not just let the company write the rulebook? 🤔
I personally don’t believe we need “industry experts” in every new, emerging type of tech to be the sole voices considered about them because that’s how we largely arrived at the great enshitterment we’re already experiencing.
Edit: It’s really quite a baffling take (given a moment’s thought) that a big problem facing America is that we aren’t cozy enough with “industry experts”. Industry practically writes the policy in this country, and the only places where we have any kind of great debate (e.g. net neutrality, encryption) are where there are conflicting industry concerns.
I’m just a dude who does general labor and have lots of insights about AI just because I’m interested and smart. People tend to come to me just to hear what I have to say.
Now look at Obama. He’s all of that and much more in the eyes of a society that’s put Obama in the spotlight. He can talk about totally boring stuff and people will still respect his opinion.
But do we really need AI to generate art?
Why can’t AI be used to automate useful work nobody wants to do, instead of being a way for capital to automate skilled labor out of high-paying jobs?
Because AI is unpredictable. Which is not a big issue for art, because you can immediately see any flaws and if you can’t, it doesn’t matter.
But for actually useful work, you don’t want to find out that the AI programmer completely made up a few lines of code that only cause problems when the airplane is flying at a 32° bank angle on a Saturday with a prime number for a date.
It’s virtually guaranteed that at some point, robots and/or AI will be capable of doing almost every human job. And then there will be a time when they can do every job better than any human.
I wonder how people will react. Will they become lazy? Depressed? Just have sex all the time? Just have sex with robots all the time?
It depends if the government introduces universal basic income or not. If they do I couldn’t care less if I don’t have a job. Any reason I have a job is so that I have money. I don’t do it so I have some kind of fulfillment in my life because it isn’t a fulfilling job.
Just have sex all the time?
I’m confused about how this one tracks. Is the AI going to make me more attractive or is it just going to lower everyone else’s standards?
Like all other animals, humans evolved to be more likely to procreate. There is an argument that all of that other stuff we do is just in support of procreation. But in a way, it’s also a distraction and an impediment to procreation. It just so happens that we’ve been unable to avoid doing that other stuff so far.
But do we really need AI to generate art?
No, but we want it to. It’s probably only a matter of time until AI can do anything better than humans can, including art. Now, if there’s an option to view great art done by humans or amazing art done by AI, I’ll go with the latter. It can already generate better photographs than I can capture with my camera, but I couldn’t care less. It takes zero joy out of my photography hobby. I’m not doing it for money.
Why should we stifle technological progress so people can still do jobs that can be done with a machine?
If they still want to create art, nobody is stopping them. If they want to get paid, then they need to do something useful for society.
Nobody’s calling to stifle technology or progress here. We could develop AI to do anything. The question is what should that be?
There’s a distinction to be drawn here between ‘things that are profitable to do, and thus there isn’t any shortage of’ and ‘things that aren’t profitable, and so there’s a shortage of’. Today, the de facto measure of ‘is it useful for society?’ seems to be the former, and that doesn’t mean what’s useful for society; it’s what’s useful for people who have money to burn.
Fundamentally, there isn’t a shortage of art, or copywriters, or software developers, or the things they do. What there is, that AI promises to change, is the inconvenient need to deal with (and pay) artisans or laborers to do it. If the alternative is for AI vendors to be paid instead of working people, is it really the public interest we’re talking about, or the interests of corporate management that would rather pocket the difference in cost between paying labor vs. AI?
AI is an enabler. I have no patience for sitting and drawing for hours on end to make extremely detailed art, but I’m a creative individual and would love to have the power to bring my ideas into reality. That’s what AI art does.
The problem with that, of course, is that it means if I’m really serious about an idea, I won’t be paying some artist(s) to make it happen. I’ll just whip open an AI art prompt (e.g. Stable Diffusion or any of the online AI art generators) and go to town.
It often takes a lot of iteration and messing with the prompt, but eventually you’ll get what you want (90% of the time). Right now you need a decent PC to run Stable Diffusion (got 8GB of VRAM? You too can generate all the AI images you want 👍), but eventually the cell phones in people’s pockets will be even better at it.
Civitai is having a contest to make a new 404 error page graphic using AI. Go have a look at some of the entries:
https://civitai.com/collections/104601
I made one that’s supposed to be like the Scroll of Truth meme:
I made that on my own PC with my limited art skills using nothing but automatic1111 stable diffusion web UI and Krita. It took me like an hour of trying out various prompts and models before I had all the images I wanted then just a few minutes in Krita to put them into a 4-panel comic format.
If I wanted to make something like that without AI it just would never have happened.
Not that it really matters in this case, but AI art just seems inconsistent in silly ways. That girl’s shirt changes each frame, her hair gets more braided, and the third frame has two left hands. I guess at first glance you don’t really notice, but it’s not hard to spot, and it hurts my brain once I do.
I don’t think it’s really helpful to group a bunch of different technologies under the banner of AI, but most people aren’t knowledgeable enough to make the distinction between software that can analyze a medical scan to tell me if I have cancer and a fancy chat bot.
The whole point of AI is that those systems aren’t fundamentally different. There is little to no human expertise that goes into those systems; it’s all self-learned from the data. That’s why we are getting AIs that can do images, music, chess, Go, chatbots, etc. all in short order. None of them are built on decades’ worth of human expertise in music or art; they’re simply created by throwing data at the problem and letting the AI algorithms figure out the rest.
There is little to no human expertise that goes into those systems, it’s all self learned from the data.
The human expertise is in the data. There’s no such thing as spontaneous AI generation of expertise from nothing. If you train up an AI on information that doesn’t have it, the AI won’t learn it. In a very real way, the profit margins of AI-generated content rest wholly on its ability to consume and derive output from source material developed by unpaid experts.
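You can see “the expertise is in the data” with the smallest possible language model. This is a toy bigram chain on a made-up corpus, nothing like a real LLM in scale, but the property carries over: the model can only recombine what it has seen, and a word absent from the training data can never come out.

```python
# A bigram text model: record which word follows which, then sample.
# Everything it "knows" comes from the training corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Train: for each word, collect the words observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate up to `length` words by walking the observed bigrams."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # a word that was never followed by anything
            break
        out.append(rng.choice(options))
    return out

sample = generate("the", 8)
# Every generated word appeared in the corpus; "dog" cannot appear.
```

Sampling from observed counts also reproduces whatever skew the corpus has, which is the bias point in miniature.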
Also, when the data is the output of people with biases, the AI will do the same.