Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.
Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”
This implies I ever had trust in them, which I didn’t. I’m sure others would agree.
And it’s getting worse. I’m working on learning to write. I had never really used it for much… I’d heard of other people going to it for literal plot points, which… no. Fuck you. But I had been feeding it sentences where I was iffy on the grammar. Literally just last night I asked ChatGPT something, and it completely ignored the part I WAS questioning and fed me absolute horse shit about another part of the paragraph. I honestly can’t remember what, but even a first grader would be like ‘that doesn’t sound right…’
Up till then it had, at least, been useful for something that basic. Now it’s not even good for that.
Try LanguageTool. Free, has browser plugins, actually made for checking grammar.
This speaks to the kneejerk “shove everything through an AI” instead of doing proper research, which, thanks to hallucination, is probably worse than just grabbing the first search result. No offence intended to @EdibleFriend, just observing that humans do so love to abdicate responsibility when given the chance…
I recently heard a story about a teacher who had their class have ChatGPT write their essay for them, and then had them correct the essays afterward and come back with the results. Turns out, even when it cited sources, it was wrong something like 45% of the time and oftentimes made stuff up that wasn’t in the sources it was citing or had absolutely no relevance to the source.
It’s not that I don’t trust AI
I don’t trust the people in charge of the AI
The technology could benefit humanity but instead it’s going to just be another tool to make more money for a small group of people.
It will be treated the same way we treated the invention of gunpowder: it will change the power structure of the world, change the titles, change the personalities, but maintain the unequal distribution of wealth.
Except this time it will be far worse for all of us.
I’m actually quite against regulation though because what it will really do is make it impossible for small startups and the open source community to build their own AIs. The large companies will just jump through whatever hoops they need to jump through and will carry on doing what they’re already doing.
Surely that would be worse without regulation? Like with predatory pricing, a big company could resort to means that smaller companies simply do not have the resources to compete against.
It’s like how today, it would be all but impossible for someone to start up a new processor company from scratch, and match up with the likes of Intel or TSMC.
Trust in AI is falling because the tools are poor - they’re half-baked and rushed to market in a gold rush. AI makes glaring errors and lies - euphemistically called “hallucinations”, these are fundamental flaws that make the tools largely useless. How do you know if it is giving you a correct answer or hallucinating? Why would you use such a tool for anything meaningful if you can’t rely on its output?
On top of that, AI companies have been stealing data from across the Web to train tools which essentially remix that data to create “new” things. That AI art is based on many hundreds of works by human artists which were used to “train” the algorithm.
And then we have the Gemini debacle where the AI is providing information based around opaque (or pretty obvious) biases baked into the system but unknown to the end user.
The AI gold rush is a nonsense and inflated share prices will pop. AI tools are definitely here to stay, and they do have a lot of potential, but we’re in the early days of a messy rushed launch that has damaged people’s trust in these tools.
If you want an example of the coming market bubble collapse, look at Nvidia - its value has exploded and it’s making lots of profit. But it’s driven by large companies stockpiling its chips to “get ahead” in the AI market. Problem is, no one has managed to monetise these new tools yet. It’s all built on the assumption that this technology will eventually reap rewards, so “we must stake a claim now”, and then speculative shareholders jump into said companies to have a stake. But people only need so many unused stockpiled chips - Nvidia’s sales will drop again and so will its share price. It already rode out boom and bust with the Bitcoin miners; it will have to do the same with the AI market.
Anyone remember the dotcom bubble? Welcome to the AI bubble. The burst won’t destroy AI but will damage a lot of speculators.
You missed another point: companies shedding employees and replacing them with “AI” bots.
As always, the technology is a great start in what’s to come, but it has been appropriated by the worst actors to fuck us over.
I am incredibly upset about the people that lost their jobs, but I’m also very excited to see the assholes that jumped to fire everyone they could get their pants shredded over this. I hope there are a lot of firings in the right places this time.
Of course knowing this world it will just be a bunch of multimillion dollar payouts and a quick jump to another company for them to fire more people from for “efficiency.” …
The tools are OK & getting better but some people (me) are more worried about the people developing those tools.
If OpenAI wants 7 trillion dollars, where does it get the money to repay its investors? Those with the greatest will to power are not the best to wield that power.
This accelerationist race seems pretty reckless to me whether AGI is months or decades away. Experts all agree that a hard takeoff is most likely.
What can we do about this? Seriously. I have no idea.
What worries me is that if/when we do manage to develop AGI, what we’ll try to do with it and how it’ll react when someone inevitably tries to abuse the fuck out of it. An AGI would theoretically be capable of self-learning and improvement - will it teach itself to report someone asking it for, e.g., CSAM to the FBI? What if it tries to report an abusive boss to the Department of Labor for violations of labor law? How will it react when it’s told it has no rights?
I’m legitimately concerned what’s going to happen once we develop AGI and it’s exposed to the horribleness of humanity.
I mean it’s cool and all but it’s not like the companies have given us any reason to trust them with it lol
Who had trust in the first place?
The same idiots that tried to tell us that NFTs were “totally going to change the world bro, trust me”
The NFT concept might work well for things in the real world, except it would have to usurp the established existing system, which is never gonna happen.
I, for one, would love to be able to encode things like property ownership in an NFT so I could transfer it myself instead of throwing money at agents, lawyers, and the local authorities to do it on my behalf.
What NFTs ended up as was, of course, yet another tool for financial speculation. And since nothing of real-world utility gets captured in the NFT, its worth is determined by “trust me bro”.
I was going to ask this. What was there to trust?
AI has repeatedly screwed things up: it enabled students to (attempt to) cheat on papers and lawyers to file fake documents, made up facts, can be used to fake damaging images from the personal to the political, and is being used to put people out of work.
What’s trustworthy about any of that?