cross-posted from: https://programming.dev/post/8121843

~n (@nblr@chaos.social) writes:

This is fine…

“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”

[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
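
For context, the paper's tasks included things like string encryption with a symmetric key in Python. Here is a rough sketch of the failure mode it describes; this is my own illustration, not code from the paper, and the function names are made up. The first version runs and looks fine but is insecure; the second uses an authenticated scheme.

```python
# Hypothetical illustration of "insecure answer rated as secure";
# not code from the paper, names invented.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_looks_fine(key: bytes, plaintext: bytes) -> bytes:
    # Plausible assistant-style answer: AES in ECB mode (key must be
    # 16/24/32 bytes). It encrypts, so it passes a quick manual check,
    # but ECB leaks plaintext structure and nothing authenticates the
    # ciphertext.
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(padded) + encryptor.finalize()

def encrypt_safer(plaintext: bytes) -> tuple[bytes, bytes]:
    # Authenticated encryption with a random per-message IV via Fernet;
    # returns (key, token) so the caller can keep the key for decryption.
    key = Fernet.generate_key()
    return key, Fernet(key).encrypt(plaintext)
```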

This isn’t even a debate lol…

Stuff like Copilot is awesome at producing code that looks right but contains subtly wrong variable names it invented itself, or bad algorithms.
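
A contrived sketch of what I mean (not actual Copilot output): it parses, runs, and reads naturally, but one name in the "plausible" completion is wrong.

```python
def total_with_tip(subtotal: float, tax_rate: float, tip_rate: float = 0.0) -> float:
    tax = subtotal * tax_rate
    # A completion that skims as correct might reuse tax_rate here
    # instead of tip_rate; it runs fine and is silently wrong:
    #   tip = subtotal * tax_rate
    tip = subtotal * tip_rate
    return round(subtotal + tax + tip, 2)
```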

And that’s not the big issue.

The big issue is when you get distracted for five minutes, come back, and forget that you were still working through that block of AI-generated code (which looks correct). So you skip reviewing the rest of it, it makes it into the source tree, and only later, in testing, do you realise it's broken because it's AI-generated code.

The other big issue is that it's only a matter of time until people get fed up and start feeding these systems dodgy data to de-train them, make them worse, or plant backdoors.
