Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

137 points

This implies I ever had trust in them, which I didn’t. I’m sure others would agree.

85 points

The fact that some people are surprised by this finding really shows the disconnect between the tech community and the rest of the population.

25 points

And it's getting worse. I'm working on learning to write. I had never really used it for much… I heard other people going to it for literal plot points, which… no. Fuck you. But I had been feeding it sentences where I was iffy on the grammar. Literally just last night I asked ChatGPT something, and it completely ignored the part I was actually unsure about and fed me absolute horse shit about another part of the paragraph. I honestly can't remember what, but even a first grader would be like "that doesn't sound right…"

Up till then it had, at least, been useful for something that basic. Now it's not even good for that.

12 points

Try LanguageTool. Free, has browser plugins, actually made for checking grammar.

This speaks to the kneejerk “shove everything through an AI” instead of doing some proper research, which is probably worse than just grabbing the first search result due to hallucination. No offence intended to @EdibleFriend, just observing that humans do so love to abdicate responsibility when given a chance…

9 points

I recently heard a story about a teacher who had their class have ChatGPT write their essays for them, and then had them fact-check the essays afterward and come back with the results. Turns out, even when it cited sources, it was wrong something like 45% of the time, and it often made up claims that weren't in the sources it was citing, or cited sources with no relevance at all.

7 points

I guess those who just have to be on the bleeding edge of tech trust AI to some degree.

Never trusted it myself, lived through enough bubbles to see one forming and AI is a bubble.

23 points

I mean, the thing we call "AI" nowadays is basically just a spell-checker on steroids. There's nothing really to trust or distrust about the tool specifically. It can be used in stupid or nefarious ways, but so can anything else.

-7 points

ThE aI wIlL AttAcK HumaNs!! sKynEt!!

Edit: these "AIs" can't even produce a decent waffle recipe, and "it will eradicate humankind"… for the gods' sake!!

It isn't even AI at all; the way the corps named it is just clickbait.

3 points

Before ChatGPT was revealed, this fell under the umbrella of what "AI" meant. I prefer to use established terms. Don't change the terms just because you want them to mean something else.

5 points

There’s a long glorious history of things being AI until computers can do them, and then the research area is renamed to something specific to describe the limits of it.

6 points

AI is just a very generic term and always has been. It's like saying "transportation equipment", which can be anything from roller skates to the space shuttle. Even the old checkers programs were described as AI back in the fifties.

Of course a vague term is a marketeer’s dream to exploit.

At least with self driving cars you have levels of autonomy.

16 points

“Trust in AI” is layperson for “believe the technology is as capable as it is promised to be”. This has nothing to do with stupidity or nefariousness.

-4 points

It’s “believe the technology is as capable as we imagined it was promised to be.”

The experts never promised Star Trek AI.

9 points

The marketers did, though.

3 points

They did promise Skynet-style AI, though. They've misrepresented it a great deal.

33 points

Took a look, and the article title is misleading. It says nothing about trust in the technology itself and only talks about not trusting companies collecting our data. So really, nothing new.

Personally I want to use the tech more, but I get nervous that it’s going to bullshit me/tell me the wrong thing and I’ll believe it.

-5 points

basically just a spell-checker on steroids.

I cannot process this idea of downplaying this technology like this. It does not matter that it’s not true intelligence. And why would it?

If it can convince most people that it has learned information and can repeat it, that's smarter than like half of all currently living humans. And it is convincing.

9 points
*

Some people found the primitive ELIZA chatbot from 1966 convincing, but I don’t think anyone would claim it was true AI. Turing Test notwithstanding, I don’t think “convincing people who want to be convinced” should be the minimum test for artificial intelligence. It’s just a categorization glitch.

-2 points

Maybe I'm not stating my point explicitly enough, but it's that names and goalposts aren't very important; cultural impact is. The current AI has already had far more impact than any chatbot from the '60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.

4 points

I would argue that there’s plenty to distrust about it, because its accuracy leaves much to be desired (to the point where it completely makes things up fairly regularly) and because it is inherently vulnerable to biases due to the data fed to it.

Early facial recognition tech had trouble distinguishing between the faces of Black people, people below a certain age, and women, and nobody could figure out why. Until they stepped back and took a look at the demographics of the employees of these companies: they were mostly middle-aged and older white men, and those were the people whose faces they used as the datasets for in-house development and testing of the tech. We've already seen similar biases in image generators, where they show a preference for thin white women as what counts as an attractive woman.

Plus, there's the data degradation issue. Supposedly, ChatGPT isn't fed data from the internet at large past 2021, because the amount of AI-generated content after that causes a self-perpetuating decline in quality.

9 points

There was any trust in (so-called) “AI” to begin with?

That’s news to me.

115 points

It’s not that I don’t trust AI

I don’t trust the people in charge of the AI

The technology could benefit humanity but instead it’s going to just be another tool to make more money for a small group of people.

It will play out the same way the invention of gunpowder did. It will change the power structure of the world, change the titles, change the personalities, but maintain the unequal distribution of wealth.

Instead this time it will be far worse for all of us.

-18 points

I'm actually quite against regulation, though, because what it will really do is make it impossible for small startups and the open-source community to build their own AIs. The large companies will just jump through whatever hoops they need to and carry on doing what they're already doing.

18 points

Surely that would be worse without regulation? Like with predatory pricing, a big company could resort to means that smaller companies simply do not have the resources to compete against.

It’s like how today, it would be all but impossible for someone to start up a new processor company from scratch, and match up with the likes of Intel or TSMC.

-7 points

Sure, but with regulation we end up with the exact same thing, just with no small-time competitors.

2 points

I think that’s a pretty bleak perspective.

Surely one of the main aims of regulation would be to avoid concentrating benefits.

Also, I have a lot of faith in the open-source paradigm; it's worked well thus far.

26 points

I have never trusted AI. One of the big problems is that the large language models will straight up lie to you. If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?

If you use AI to generate code, oftentimes it will be buggy, and sometimes it won't even work at all. There is also the issue of whether it just spat out a piece of copyrighted code that could get you in trouble if you use it in something.

-21 points

One of the big problems is that the large language models will straight up lie to you.

Um… that’s a trait AI shares with humans.

If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?

You have to double check human work too. So, since you are going to double check everything anyway, it doesn’t really matter if it’s wrong?

If you use AI to generate code, often times it will be buggy

… again, exactly the same as a human. Difference is the LLM writes buggy code really fast.

Assuming you have good testing processes in place, and you better have those, AI generated code is perfectly safe. In fact it’s a lot easier to find bugs in code that you didn’t write yourself.
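A minimal sketch of what that looks like in practice, in Python. Everything here is hypothetical: `paginate` stands in for a helper an LLM might generate, and the human-written assertions, not the generator, decide whether it ships.

```python
# Hypothetical sketch: human-written tests gating assistant-generated code.
# `paginate` plays the role of LLM output; the test function is the human's
# safety net that catches off-by-one and empty-input bugs before merge.

def paginate(items, page_size):
    """Split items into pages of at most page_size elements each."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

def test_paginate():
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
    assert paginate([], 3) == []            # empty input yields no pages
    assert paginate([1, 2], 5) == [[1, 2]]  # page larger than input

test_paginate()
```

The point isn't that this particular helper is hard to write, but that the same review discipline applies whether the first draft came from a colleague or a model.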

There is also the issue of whether or not it just spat out a piece of copyrighted code that could get you in trouble

Um - no - that’s not how copyright works. You’re thinking of patents. But human written code has the same problem.

0 points

I'm using GitHub Copilot every day just fine. It's great for fleshing out boilerplate and other tedious things, where I'd rather spend the time working out the logic instead of the syntax. If you actually know how to program and don't treat it as if it can do it all for you, it's a pretty great time saver. An autocomplete on steroids, basically. It integrates right into my IDE and types out code WITH me at the same time, like someone sitting right beside you on a second keyboard.
