147 points

Anyone surprised by this wasn’t paying attention. This is the “AI” apocalypse everyone has been wringing their hands over and dumbass executives have been salivating over. This is exactly the problem with LLMs: they produce very convincing-looking content, but it’s not actually factual content. You need teams of fact checkers and editors to review all their output if you care at all about accuracy.

45 points

As with software development, actually writing the stuff down is the easiest part of the work. If you already have someone fact-checking and editing… why do you need AI to shit out crap just for the writing? It would be easier to gather the facts first, fact-check them, then wrangle them through the AI if you don’t want to hire a writer (+ another pass for editing).

LLMs look like magic at a glance, but people thinking they are going to produce high quality content (or code, for god’s sake) are delusional.

31 points

Yeah. I’m a programmer. Everyone has been telling me that I’m about to be out of a job any day now because the “AI” is coming for me. I’m really not worried. It’s way harder to correct bad code than it is to just throw it all away and start fresh, and I can’t even imagine how difficult it’s going to be to debug whatever garbage some “AI” has spewed out. If you employ a dozen programmers now and start using AI to generate your code, you’re going to need two dozen programmers to debug and fix it’s output.

The promise with “AI” (more accurately machine learning, as this is not AI) as far as code is concerned is as a sort of smart copy and paste, where you can take a chunk of code and say “duplicate this but with these changes”, and then verify and tweak its output. As a smart refactoring tool it shows a lot of promise, but it’s not like you’re going to sit down and go “write me an app” and suddenly it’s done. Well, unless you want Hello World, and even then I’m sure it would find a way to introduce a bug or two.

14 points

unless you want Hello World, and even then I’m sure it would find a way to introduce a bug or two.

“Greetings planet”

D’oh!

9 points

Yep, I’ve had plenty of discussions about this on here before. Which was a total waste of time, as idiots don’t listen to facts. They also just keep moving the goalposts.

One developer was like they use AI to do their job all the time, so I asked them how that works. Yeah, they “just” have to throw away the 20% of the output that’s obviously garbage when writing small scripts, then it’s great!

Or another one who said AI is the only way for them to write code, because their main issue is getting the syntax right (they’re dyslexic). When I told them that the syntax and actually writing the code is the easiest part of my job, they shot back that they don’t care; they’re going to continue “looking like a miracle worker” by having AI spit out their scripts…

And yet another one discussed at length how you obviously can’t magically expect AI to put the right things out. So we got to the topic of code reviews and I tried to tell them: give a real developer the 1000+ line pull requests an AI might spit out, and there’s a snowball’s chance in hell you’ll get bug-free code despite reviews. So then they argued: duh, you give the AI small, bite-sized Jira tickets to work on, so you can review it! And if the pull request is too long, you tell the AI to make a shorter, more human-readable one! And then we’re back to square one: the senior developer reviewing the mess of code could just write it faster and more correctly themselves.

It’s exhausting how little understanding there is about LLMs and their limitations. They produce a ton of seemingly high quality stuff, but it’s never 100% correct.

6 points

People have been saying programming would become redundant since the first 4GLs came out in the 1980s.

Maybe it’ll actually happen some day… but I see no sign of it so far.

3 points

Fix its* output.

2 points
Removed by mod
26 points

I don’t think this one is even an LLM, it looks like the output of a basic article spinning script that takes an existing article and replaces random words with synonyms.
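For what it’s worth, that kind of spinner is trivial to write, no language model required. A minimal sketch (the synonym table here is made up for illustration, not taken from any real tool) shows how blind word-for-word swaps produce exactly the damage quoted elsewhere in this thread, including the “a extremely regarded” article mismatch:

```python
import random
import re

# Hypothetical synonym table; a real spinner would pull from a thesaurus
# database. Blind swaps turn "forward" (the position) into "ahead" and
# "games" into "video games", just like the quoted article.
SYNONYMS = {
    "player": ["participant"],
    "games": ["video games"],
    "forward": ["ahead"],
    "highly": ["extremely"],
    "career": ["profession"],
    "significant": ["vital"],
}

def spin(text: str) -> str:
    """Replace each known word with a randomly chosen synonym."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        options = SYNONYMS.get(word.lower())
        return random.choice(options) if options else word
    return re.sub(r"[A-Za-z]+", swap, text)

# Note the spinner never fixes "a" -> "an" after a swap:
print(spin("a highly regarded player"))  # → "a extremely regarded participant"
```

No grammar, no context, no understanding of basketball terminology, which is why the output reads like it does.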

17 points

This seems like the case. One of the first passages:

Hunter, initially a extremely regarded highschool basketball participant in Cincinnati, achieved vital success as a ahead for the Bobcats.

Language models are text prediction machines. None of this text is predictable and it contains basic grammatical errors that even small models will almost never make.

13 points

AI doesn’t exist, but it will ruin everything anyway.

https://youtu.be/EUrOxh_0leE?si=voNBJjvvuyzb8oZk

6 points

Hah, great video. There’s a reason I put quotes around AI in my response: yes, what’s being called AI by everyone is not in fact AI, but most people have never even heard of machine learning, let alone understand the difference between it and AI. I’ve seen a trend of people starting to use the term AGI to differentiate between “AI” and actual AI, but I’m not really a fan of that, because I think it just waters down the term AI.

8 points

In the industry, ML is considered a subset of AI, as are genetic algorithms and other approaches to developing “intelligence”. That’s why people tend to use AGI now to differentiate, because the field’s been evolving (not that I agree with the approach either). Honestly, if you showed someone even 10 or 15 years ago what we can do with RL, computer vision, and LLMs, they’d certainly call it AI. I think the real problem is a failure to convey what these things actually are; they’re sold to the public under the term AI only to hype up the brand/business.

10 points

The danger about current AI is people giving them important tasks to do when they aren’t up to it. To put it in War Games terms, the problem is not Joshua, not even Professor Falken, but the McKittricks of the world.

6 points

if you care at all about accuracy.

There’s the problem right there. The MSN homepage ain’t exactly a pinnacle of superlative journalism.

4 points

This article wasn’t even remotely convincing, though.

51 points

Throughout his NBA profession, he performed in 67 video games over two seasons

28 points

Dude really went wild during the steam summer sale.

13 points

Don’t we all.

16 points

Gotta teach it to add qualifying language. The above is falsifiable (even if it happens to be true).

Throughout his NBA profession, he performed in approximately 67 video games over two seasons

Throughout his NBA profession, he performed in at least 67 video games over two seasons

The second one is only technically falsifiable. It wouldn’t be practical though as you’d have to prove you investigated every video game over a 2 year period (and not necessarily contiguous). Not an easy task.

7 points

Agreed. Otherwise the content was perfect.

42 points

I really hope public opinion on AI starts to change. LLMs aren’t going to make anyone’s life easier, except in that they take jobs away once the corporate world determines that they are in a “good-enough” state – desensitizing people to this kind of stupid output is just one step on that trail.

The whole point is just to save the corporate world money. There will never, ever be a content advantage over a human author.

18 points

The thing is, LLMs are extremely useful at aiding humans. I use one all the time at work and it has made me faster at my job, but left unchecked they do really stupid shit.

2 points

I agree they can be useful (I’ve found intelligent code snippet autocompletion to be great), but it’s really important that the humans using the tool are very skilled and aware of the limitations of AI.

E.g., my usage generates only very, very small amounts of code (usually a few lines). I have to very carefully read those lines to make sure they are correct. It’s never generating something innovative; it simply guesses what I was going to type anyway. So it only saves me time spent typing, and the AI is by no means in charge of logic. It’s also wrong a lot of the time. Anyone who lets AI generate a substantial amount of code, or lets it generate code they don’t understand thoroughly, is both a fool and a danger.

It does save me time, especially on boilerplate and common constructs, but it’s certainly not revolutionary and it’s far too inaccurate to do the kinds of things non programmers tend to think AI can do.

9 points

It’s already made my life much easier.

The technology is amazing.

It’s just there’s a lot of stupid people using it stupidly, and people whose job it is to write happen to really like writing articles about its failures.

There’s a lot more going on in how it is being used and improving than what you are going to see unless you are actually using it yourself daily and following research papers on it.

Don’t buy into the anti-hype, as it’s misleading to the point of bordering on misinformation.

8 points

I’m going to fight the machines for the right to keep slaving away myself

And when I’m done, capitalism will give me an off day as a treat!

0 points

You’re missing the point. If you don’t have a job to “slave away” at, you don’t have the money to afford food and shelter. Any changes to that situation, if they ever come, are going to lag far behind whatever events cause a mass explosion of unemployment.

It’s not about licking a boot, it’s that we don’t want to let the boot just use something that should be a net good as extra weight as they step on us.

1 point

I am not going to purposefully waste human life on tasks that machines could perform, or help us do faster, just because late capitalism doesn’t let me, the worker, reap the value from them.

It removes human labor.

On a bigger scale we had the loom, the printing press, the steam engine, the computer. Imagine if we’d refused them.

I can’t see us getting ensnared in some new dark age propelled by an “I need to keep my job” status quo just because we found ourselves with a moronic economic system that makes innovations bad news for the workers they replace.

If it takes AI taking away our livelihoods to get a chance to rework this failing doctrine, so be it.

I’m not talking communism; I’m barely even hoping for an organic response to it, likely a UBI.

7 points

As someone who works in content marketing, this is already untrue at the current quality of LLMs. It still requires a LOT of human oversight, which obviously it was not given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.

Some examples are writing outside of a person’s subject expertise at a relatively basic level. This used to take hours or days of entirely self-directed research on a given topic, even if the ultimate article was going to be written for beginners and therefore in broad strokes. With diligent fact-checking and ChatGPT alone, the whole process, including final copy, takes maybe 4 hours.

It’s also an enormously useful research tool. Rather than poring over research journals, you can ask LLMs with academic plug-ins to give a list of studies that fit very specific criteria and link to full texts. Sometimes it misfires, of course, hence the need for a good writer still, but on average this can cut hours from journalistic and review pieces without harming (often improving) quality.

All the time writers save by having AI do legwork is then time they can instead spend improving the actual prose and content of an article, post, whatever it is. The folks I know who were hired as writers because they love writing and have incredible commitment to quality are actually happier now using AI and being more “productive” because it deals mostly with the shittiest parts of writing to a deadline and leaves the rest to the human.

7 points

It still requires a LOT of human oversight, which obviously it was not given in this example, but a good writer paired with knowledgeable use of LLMs is already significantly better than a good content writer alone.

I’m talking about the future state. The goal clearly is to avoid the need for human oversight altogether. The purpose of that is to save some rich people more money. I also disagree that LLMs improve the output of good writers, but even if they did, the cost to society is high.

I’d much rather just have the human author, and I just hope that saying “we don’t use AI” becomes a plus for PR due to shifting public opinion.

1 point

No, it’s not the ‘goal’.

Somehow when it comes to AI it’s humans who have the binary thinking.

It’s not going to be “either/or” anytime soon.

Collaboration between humans and ML is going to be the paradigm for the foreseeable future.


I mean… if they’re dead, they probably really suck at basketball so it’s not exactly untrue.

15 points

Dead people really are quite useless in basketball.

8 points

“Hello, I’m the agent for dead player ‘Magic Bob’, I’d like to enrol him in your team of the Eagles…
Hello?”


“Sir this is the other NBA. You wanna contact the Necromatic Basketball Association.”

8 points

Oh, right.

Is there a pentagram you could point me to?

27 points

I mean, MSN is just a portal, and I doubt there’s much behind it besides which domains are popular. MSN “published” this the same way Google News publishes articles. It sounds better to say Microsoft did it, but it’s from some news site called Race Track and it was simply scraped by MSN.

10 points

Yeah, but that’s a key part of the problem. The media had already automated a lot of news curation into Google News, MSN and other portals, getting people used to not paying much attention to the particular source of news. The industry is now moving to generating the actual content in an automated way, rather than just the aggregation step.

1 point

But it still isn’t MSN who did it. The key part of the problem is entirely glossed over in the article.

8 points

“The full story is that back in 2020, MSN fired the team of human journalists responsible for vetting content published on its platform. As a result, as we reported last year, the platform ended up syndicating large numbers of sloppy articles about topics as dubious as Bigfoot and mermaids, which it deleted after we pointed them out.”

MSN is not blameless for publishing bad content without supervision. And we are due for a wave of bad AI content starting now. So this problem is going to keep getting worse.


Technology

!technology@lemmy.world
