Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

39 points
9 points

can we have an “un-ampify” bot?

1 point

In the meantime, de-AMP your life.

1 point

This is the Fediverse, not Reddit. We don’t need to be bound by the old ways. We could perhaps get a plugin for the instance itself that automatically replaces AMP links with non-AMP links when the user makes the post in the first place.

24 points

In terms of hype it’s the crypto gold rush all over again, with all the same bullshit.

At least the tech is objectively useful this time around, whereas crypto adds nothing of value to the world. When the dust settles we will have spicier autocomplete, which is useful (and hundreds of useless chatbots in places they don’t belong…)

15 points

For something that is already proving useful, there is no way it will simply fizzle out. The exact same thing was said about the internet itself, and look where we are now.

The difference between crypto and AI is that, as you said, crypto never showed the average person anything tangible. AI, by contrast, is spreading like wildfire through software and research, and is being used worldwide by people who often don’t even realize it.

7 points

I’ve seen my immediate friends use chatbots to help them get through boring yearly trainings at work, write speeches for weddings, and put together rough-draft lesson plans.

9 points

Why do we fall into the fallacy of assuming this tech is going to be stagnant? At the moment it does very low-tier coding, but the idea that we’d even be having a conversation about a computer possibly writing code for itself (not just in a machine-learning way) was mere science fiction only a year ago.

8 points

And even in its current state it is far more useful than just generating “hello world.” I’m a professional programmer, and although my workplace is currently frantically forbidding ChatGPT usage until the lawyers figure out what this all means, I’m finding it invaluable for whatever projects I’m doing at home.

Not because it’s a great programmer, but because it’ll quickly hammer out a script to do whatever menial task I happen to need done at any given moment. I could do that myself but I’d have to go look up new APIs, type it out, such a chore. Instead I just tell ChatGPT “please write me a python script to go through every .xml file in a directory tree and do <whatever>” and boom, there it is. It may have a bug or two but fixing those is way faster than writing it all myself.
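As a minimal sketch of the sort of throwaway script described here, using only the standard library (the original “&lt;whatever&gt;” step is elided, so a simple stand-in action of printing each document’s root tag is used instead):

```python
import os
import xml.etree.ElementTree as ET

def process_xml_tree(root_dir):
    """Recursively visit every .xml file under root_dir."""
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        for name in filenames:
            if name.lower().endswith(".xml"):
                path = os.path.join(dirpath, name)
                tree = ET.parse(path)
                # Stand-in for the "<whatever>" step: just report
                # each document's root tag.
                print(path, tree.getroot().tag)
```

The point stands either way: the walk-and-parse scaffolding is exactly the menial part a chatbot can hammer out in seconds, leaving only the per-file logic to fill in.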

5 points

I’ve gotten it to generate boilerplate for converting one library to another for certain embedded protocols on different platforms. It produces entry-level code, but nothing that’s too hard to clean up, and it helps you get the gist of how a library works.

2 points

Exactly my experience as well. Seeing Copilot suggestions often feels like magic. Far from perfect, sure, but it’s essentially a very context-“aware” snippet generator: code completion++.

I have the feeling that people who laugh about this and downplay it either haven’t worked with it and/or are simply stubborn and don’t want to deal with new technology. Basically the same kind of people who, when IDEs with code completion came to be, laughed at it and proclaimed only vim and emacs users to be true programmers.

12 points

By now, most of us have heard about the survey that asked AI researchers and developers to estimate the probability that advanced AI systems will cause “human extinction or similarly permanent and severe disempowerment of the human species”. Chillingly, the median response was that there was a 10% chance.

How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides – except that these upsides are, for the most part, hallucinatory.

Ummm, how about the obvious answer: most AI researchers don’t think they’re the ones working on tools that carry existential risks? Good luck overthrowing human governance with ChatGPT.

13 points

Fossil fuels carry a much higher chance of causing human extinction, yet the news cycle is saturated with fears that a predictive language model is going to make calculators crave human flesh. Wtf is happening?

7 points

Capitalism. Be afraid of this thing, not of that thing. That thing makes people lots of money.

5 points

I agree that climate change should be our main concern. The real existential risk of AI is that it will leave millions of people unemployed or underemployed, greatly swelling the already huge lower class. With that many people unable to take care of themselves and their families, conditions will be ripe for all the bad parts of humanity to take over unless we make a major shift away from the current model of capitalism. AI would be the initial spark, but it will be human behavior that dooms (or elevates) humans as a result.

The AI apocalypse won’t look like Terminator, it will look like the collapse of an empire and it will happen everywhere that there isn’t sufficient social and political change all at once.

5 points

I don’t disagree with you, but this is a big issue with technological advancement in general. Whether AI replaces workers or automated factories do, the effects are the same. We don’t need to make a boogeyman of AI to drive policy changes that protect the majority of the population. I’m just frustrated with AI scares dominating the news cycle while completely missing the bigger picture.

1 point

That’s only a problem because of our current economic system. The AI isn’t the problem, the society that fails to adapt is.

4 points

I think the results are as “high” as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.

2 points

the results are as “high” as 10 percent because the researchers do not want to downplay how “intelligent” their new technology is. But it’s not that intelligent, as we and they all know. There is currently zero chance any “AI” can cause this kind of event.

Yes, the current state is not that intelligent. But that’s also not what the experts’ estimates are about.

The estimates and worries concern a potential future, if we keep improving AI, which we do.

This is similar to being in the 1990s and saying climate change is of no concern because CO2 levels at the time were no big deal. Sure, but they won’t stay at that level, and then they can very well become a threat.

1 point

Not directly, no. But the tools we already have that allow imitating voices and faces in video streams in real time can certainly be used by bad actors to manipulate elections, or worse. Things like that, especially if further refined, could be used to figuratively pour oil onto already burning political fires.

1 point

The less obvious answer is Roko’s Basilisk.

12 points

It will, and it is already helping humanity in various fields.

We need to separate PR speech from reality. AI is already being used in pharmaceuticals, aviation, tracking (of the air, of the ground, of the rains…), production… and there is no way you can say these are not helping humanity in their own way.

AI will not solve the listed issues on its own. AI as a concept is a tool that will help, but the outcome will always come down to how well it’s used and with what other tools.

Also, saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful take that has no chance of happening, simply because it’s not profitable.

6 points

saying AI will ruin humanity’s existence or bring “disempowerment” of the species is a completely awful take that has no chance of happening, simply because it’s not profitable.

The economic incentives to churn out the next powerful beast as quickly as possible are obvious.

Making it safe costs extra, so that’s gonna be a neglected concern for the same reason.

We also notice that the resulting AIs are studied only after they are released, sometimes revealing surprising emergent capabilities.

So you would be right if we approached the topic with a rational overhead view, but we don’t.

11 points

AI bots don’t ‘hallucinate’; they just make shit up as they go along, mixed with some stuff they found on Google, and present it confidently so that it looks like they know what they’re talking about.

Techbro CEOs are just creeps. They don’t believe their own bullshit, and they know full well that their crap is not for the benefit of humanity; otherwise they wouldn’t all be doomsday preppers. It’s all a perverse result of the American worship of self-made billionaires.

See also The super-rich ‘preppers’ planning to save themselves from the apocalypse

9 points

AI bots don’t ‘hallucinate’; they just make shit up as they go along, mixed with some stuff they found on Google, and present it confidently so that it looks like they know what they’re talking about.

The technical term for that is “hallucinate” though, like it or not.
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)

9 points

“Hallucination” works because everything an LLM outputs is equally true from its perspective. Trying to replace the word “hallucination” usually leads to the implication that LLMs are lying, which is not possible: they don’t currently have the capacity to lie, because they have neither intent nor a theory of mind.

6 points

Well, neither can it hallucinate, by the “not being able to lie” standard. To hallucinate would imply there is some correct baseline behavior from which hallucinating is a deviation.

An LLM is not a mind; one shouldn’t use words like “lie” or “hallucinate” about it. That anthropomorphises a mechanistic algorithm.

This is simply an algorithm producing arbitrary answers, with no validity or reality checks on the results. By the same token, the times it happens to produce a correct answer are not “not hallucinating”: it is hallucinating, or not, exactly as much regardless of the correctness of the answer, since it is just doing its algorithmic thing.

2 points

Do we have an AI with a theory of mind, or just an AI that answers the questions in the test correctly?

Whether there is a difference between those two things is more of a philosophical debate. But assuming there is one, I would argue it’s the latter. It has likely seen many similar examples during training (the prompts are in the article you linked; it’s not unlikely that similar texts are in a web-scraped training set), and even if not, it’s not that difficult to extrapolate those answers from the many texts it must have read in which a character was surprised that an item was missing when they hadn’t seen it being stolen.

1 point

Misinformation is misinformation, whether it is intentional or not. And it’s not far-fetched that someone will soon launch a propaganda bot whose biased training data intentionally spreads fake news.


Technology

!technology@beehaw.org
