140 points

“I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well”

For how long will we be forced to endure different versions of the same article?

The study said 86.66% of the generated software systems were “executed flawlessly.”

Like I said yesterday, in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy: that is trash. A company with absolute shit code still has virtually all of it “execute flawlessly.” Whether or not code executes is not the bar by which we judge it.
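
To make that concrete, here’s a toy Python sketch (my own example, nothing to do with the study’s code) that would count as “executed flawlessly” while being flat-out wrong:

```python
# Runs to completion with zero errors, so it "executes flawlessly"...
def average(numbers):
    # ...but the denominator is off by one, so every answer is garbage.
    return sum(numbers) / (len(numbers) - 1)

print(average([2, 4, 6]))  # prints 6.0; the correct average is 4.0
```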

Even if it were to hit 100%, which it does not, there’s so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders and their conflicting desires, getting to know the human beings who need a problem solved, and so on.

LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

32 points

AI regularly hallucinates API endpoints that don’t exist, functions that aren’t part of that language, libraries that don’t exist. There’s no fucking way it did any of this bullshit. Like, yeah - it can probably do a mean autocomplete, but this is being pushed so hard because they want to drive wages down even harder. They want know-nothing middle-managers to point to this article and say “I can replace you with AI, get to work!”…that’s the only purpose of this crap.
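
Anyone who has pasted LLM output into a real project has seen this failure mode. A sketch of it, with a package name I invented for illustration (to my knowledge it doesn’t exist):

```python
# Plausible-looking code an LLM might emit: the API reads naturally,
# but the dependency is imaginary, so it dies before doing anything.
try:
    import hyperjson_turbo  # hypothetical package name, used only to illustrate
    print(hyperjson_turbo.loads('{"ok": true}'))
except ModuleNotFoundError as err:
    print(f"Hallucinated dependency: {err}")
```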

12 points

I think it’s less a conspiracy and more about pumping investment. These AI articles sound exactly like the early days of the internet, when most people had only a cursory experience with it and investors were pumping any company that so much as said the word “internet.”

Now that “Blockchain” has been beaten to death, they need a new hype word to drive mindless investment.

20 points

Thank you for writing this so I only have to upvore upvote you.

Edit: What a difference one key can make.

22 points

I only have to upvore you

holy music stops

17 points

I don’t know what an upvore is and I don’t want to know.

5 points

Is it… vore but… upwards? So… vomiting people? Nah, I don’t want to know either.

11 points

But they could replace CEOs from what I can tell.

7 points

A monkey could replace CEOs.

3 points

Please, PLEASE do not use Elon Musk, Bezos, and other such people as the training model.

5 points

So what you’re saying is that 86.66% of the time, it works every time.

5 points

80% accuracy, that is trash

More than 80% of most codebases is boilerplate stuff: including the right files for dependencies, declaring functions with the right number of parameters using the right syntax, handling basic easily anticipated errors, etc. Sometimes there’s even more boilerplate, like when you’re iterating over a list, or waiting for input and handling it.
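
Rough sketch of the split I mean, assuming a generic “read a file, loop over records” script; the file name and structure are made up:

```python
# Boilerplate an LLM autocompletes just fine: imports, argument handling,
# the obvious error checks, and a loop over the input.
import json
import sys

def main() -> int:
    if len(sys.argv) != 2:
        print("usage: report.py <data.json>", file=sys.stderr)
        return 1
    try:
        with open(sys.argv[1]) as f:
            records = json.load(f)
    except (OSError, json.JSONDecodeError) as err:
        print(f"could not read input: {err}", file=sys.stderr)
        return 1
    for record in records:
        # The non-boilerplate part lives here: deciding what the business
        # actually needs done with each record. There's no pattern to autocomplete.
        print(record)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```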

The rest of the stuff is why programming is a highly paid job. Even a junior developer is going to be much better than an LLM at that stuff, because at least they understand it’s hard, and they often know to ask for help when they’re in over their heads. An LLM will “confidently” just spew out plausible bullshit and declare the job done.

Because an LLM won’t ask for help, won’t ask for clarifications, and can’t understand that it might have made a mistake, you’re going to need your highly paid programmers to go in and figure out what the LLM did and why it’s wrong.

Even perfecting self-driving is going to be easier than a truly complex software engineering project. At least with self-driving, the constraints are going to be limited because you’re dealing with the real world. The job is also always the same – navigate from A to B. In the software world you’re only limited by the limits of math, and math isn’t very limiting.

I have no doubt that LLMs and generative AI will change the job of being a software engineer / programmer. But fundamentally, programming comes down to actually understanding the problem, and while LLMs can pretend they understand things, they’re really just well-trained parrots who know what sounds to make in specific situations, with no actual understanding behind it.

3 points

But did you hear that it uses more water than regular data centers?

1 point

LLMs are not capable of this kind of meaningful collaboration

Which is why they’re a tool for professionals to amplify their output, not a replacement for them.

2 points

But C-suites will read articles like this and fire their development teams “because AI can do it.” I have my popcorn ready for the day it begins.

