ChatGPT Isn’t as Good at Coding as We Thought
I’ve experimented a bit with ChatGPT, asking it to create some fairly simple code snippets to interact with a new API I was messing with, and it straight up confabulated methods for the API based on extant methods from similar APIs. It was all very convincing, but if there’s no way of knowing when it’s just making things up, it’s literally worse than useless.
I’ve had similar experiences with it telling me to call functions of third-party libs that don’t exist. When you tell it “Function X does not exist,” it says “I’m sorry, you’re right, function X does not exist in library A. Here is another example using function Y” — and then function Y doesn’t exist either.
I have found it useful in a limited scope, but I have found Copilot to be much more of a daily time saver.
So? Those mistakes will come up in testing, and you can easily fix them (either yourself or by asking the AI to do it, whichever is faster).
I’ve successfully used it to write code for APIs that didn’t exist at all a couple of years ago, when ChatGPT’s model was trained. It doesn’t need to know the API to generate working code; you just need to tell it what APIs are available as part of your conversation.
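Something like this, for illustration — the WidgetClient class and all of its methods below are made up, standing in for whatever real API surface you'd paste into the conversation:

```python
# A minimal sketch of the technique: put the API surface in the prompt
# so the model writes against signatures it has actually been shown.
# WidgetClient and its methods are hypothetical, not a real library.

API_SURFACE = """
You may only call this client; do not invent other methods:

class WidgetClient:
    def __init__(self, api_key: str): ...
    def list_widgets(self) -> list[dict]: ...
    def create_widget(self, name: str) -> dict: ...
"""

prompt = API_SURFACE + "\nWrite a script that prints the name of every widget."
# Send `prompt` to ChatGPT; the generated code can now target methods
# it was shown rather than ones it half-remembers from similar APIs.
```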
Except that in code, you can write unit tests and checks that the output absolutely has to pass.
If you have to write the code and tests yourself… that’s just normal coding then.
You don’t; you get it to write both the code and the tests. And you read both of them yourself. And you run them in a debugger to verify they do what you expect.
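As a concrete sketch, with an invented function standing in for whatever the model generated (slugify and its tests are hypothetical, not from any real project):

```python
# Model-generated code plus model-generated tests: you still read both
# and run them to confirm the behavior is what you actually asked for.
import re

def slugify(title: str) -> str:
    # Pretend this function came back from ChatGPT.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_punctuation_runs():
    assert slugify("a  --  b") == "a-b"

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_collapses_punctuation_runs()
    print("all checks passed")
```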
Yeah, that’s still half the work of “normal coding”, but it’s also only half the work. Which is a pretty awesome boost to productivity.
But where it really boosts your productivity is with APIs that you aren’t very familiar with. ChatGPT is a hell of a lot better than Google for simple “what API can I use for X” questions.
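For instance, asking it something like “what can I use to parse an ISO 8601 timestamp in Python?” points you straight at the standard library:

```python
# The kind of answer those questions turn up: ISO 8601 parsing is
# already built into the standard library (Python 3.7+).
from datetime import datetime

dt = datetime.fromisoformat("2023-04-01T12:30:00")
print(dt.year, dt.month, dt.hour)  # 2023 4 12
```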