Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause "human extinction or similarly permanent and severe disempowerment of the human species". Chillingly, the median response was that there was a 10% chance.
How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides, except that these upsides are, for the most part, hallucinatory.
Ummm, how about the obvious answer: most AI researchers don't think they're the ones working on tools that carry existential risks. Good luck overthrowing human governance using ChatGPT.
I think the results are as "high" as 10 percent because the researchers do not want to downplay how "intelligent" their new technology is. But it's not that intelligent, as we and they all know. There is currently zero chance any "AI" can cause this kind of event.
> the results are as "high" as 10 percent because the researchers do not want to downplay how "intelligent" their new technology is. But it's not that intelligent, as we and they all know. There is currently zero chance any "AI" can cause this kind of event.
Yes, the current state is not that intelligent. But that's also not what the experts' estimates are about.
The estimates and worries concern a potential future in which we keep improving AI, which we are doing.
This is similar to being in the 1990s and saying climate change is of no concern, because the current CO2 levels are no big deal. Yeah right, but they won't stay at that level, and then they can very well become a threat.
Not directly, no. But the tools we already have that can imitate voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that, especially if further refined, could be used to figuratively pour oil onto already burning political fires.
The chance of fossil fuels causing human extinction is much higher, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening?
I agree that climate change should be our main concern. The real existential risk of AI is that it will leave millions of people out of work or underemployed, greatly multiplying the already huge lower class. With that many people unable to take care of themselves and their families, conditions will be ripe for all of the bad parts of humanity to take over unless we have a major shift away from the current model of capitalism. AI would be the initial spark that starts this, but it will be human behavior that dooms (or elevates) humans as a result.
The AI apocalypse won't look like Terminator; it will look like the collapse of an empire, and it will happen everywhere that there isn't sufficient social and political change all at once.
I don't disagree with you, but this is a big issue with technological advancement in general. Whether workers are replaced by AI or by automated factories, the effects are the same. We don't need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I'm just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.