1 point

This couples intentions to the code, which in my example would be dynamic.

That’s going to be a bad time.

My point is that the conventions that have served development well for the past 50 years are likely to change as the tooling does.

Programming is effectively about managing complexity.

Yes, when humans are reading and writing the code themselves, it is better for the development language itself, rather than comments, to be the layer at which you encode intention.
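As a small illustration of that point (hypothetical Rust, chosen only because Rust comes up later in the thread), here is the same intent expressed once as a comment over opaque code, and once at the language layer through names and types:

```rust
// Intent carried by a comment; the code itself says nothing.
// check that the user is older than 18 before allowing access
fn check(u: i32) -> bool {
    u > 18
}

// Intent carried by the language layer: the names and the constant
// encode the rule, so no comment is needed.
const ADULT_AGE: i32 = 18;

fn is_adult(age_in_years: i32) -> bool {
    age_in_years > ADULT_AGE
}

fn main() {
    // Both encode the same rule; only the second is self-documenting.
    assert_eq!(check(20), is_adult(20));
    assert!(!is_adult(18));
}
```

The behavior is identical; the difference is purely where the intention lives.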

But how many projects have historically run into problems because, a decade earlier, they chose a language whose tooling or integrations later stagnated compared to another pick?

Imagine if the development work had been done exclusively in pseudocode and comments guiding a generative AI writing in language A. How much easier might porting everything to language B be?

Language agnostic development may be quite viable within a year or so.

And just as you could write software in binary, it is more valuable in time and cost to let a compiler handle that and work at an abstracted layer.

I’m saying that the language is becoming something which software can effectively abstract, so moving the focus yet another layer up will likely be more valuable than clinging to increasingly obsolete paradigms.

1 point

> Language agnostic development may be quite viable within a year or so.

I doubt that very much. GPT4 (to my knowledge still the best LLM) is far from being there. Now that my initial hype has worn off, I have basically stopped using it, because I have to “help” it too much (and it has actually gotten worse over time…), so I end up spending more time coaxing usable results out of it than just writing the goddamn code myself. There would have to be a very large step in progress for this to be anywhere near feasible (though maybe it’s true for some “boilerplate” React UI code). Keep in mind that you should still review all the generated code, which takes a good chunk of the time (especially when it’s full of issues, as it is with LLMs). Often I go over it and think “yes, this is ok”, and then I look at it in more detail and find so many issues that it costs me more time than writing the code myself in the first place.

I have actually fed GPT4 a lot of natural language instructions to write code, and it was kind of a disaster. I have to try that again with more code in the instructions, as I think it’s better to just give an LLM the code directly. If it really gets smart enough, it will understand the intentions of the code without comments (as it has seen a lot of code).

Context size is also a big issue; the LLM just doesn’t have enough overview of the code and the relevant details. (I still need to try the 32k GPT4 model and feed it more code of the architecture. That may help, but it’s obviously a lot of work…)

Same goes for humans: if your code is really too complex, you can likely simplify it so that humans can read it without comments. If not, it falls into the first category I’ve listed (complex math or similar), and then of course a comment makes sense for a complex piece of code that needs more context. Otherwise I would only add comments for edge cases and ideas (e.g. TODOs).

For the rest, a good API doc (javadoc, rustdoc etc.) is more than enough (if it’s clear what a function should do and the function is written in a modular way, it should be easy to read the code, IMHO).
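For instance, a rustdoc comment on a small, modular function (a made-up example, not from the thread) documents the contract at the API level, where `cargo doc` renders it, so the function body itself can stay comment-free:

```rust
/// Returns the arithmetic mean of `values`, or `None` if the slice is empty.
///
/// This doc comment becomes part of the rendered API documentation;
/// the body needs no line-by-line comments.
fn mean(values: &[f64]) -> Option<f64> {
    if values.is_empty() {
        return None;
    }
    Some(values.iter().sum::<f64>() / values.len() as f64)
}

fn main() {
    assert_eq!(mean(&[1.0, 2.0, 3.0]), Some(2.0));
    assert_eq!(mean(&[]), None);
}
```

The contract (what goes in, what comes out, the empty-slice edge case) lives in the doc comment; the implementation is short enough to read directly.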

Really, if you need comments, think about the code first: is this the simplest approach? Can I make it more readable? I feel like I wrote a lot of “unreadable” (or too complex) code in my junior years…
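A sketch of that kind of rewrite (hypothetical names and rules, just to illustrate): the first version needs a comment to be understood, while the second makes the comment unnecessary by extracting named predicates:

```rust
// Before: the comment compensates for an unreadable condition.
// discount applies on weekends for orders over 100
fn price_v1(total: f64, day: u8) -> f64 {
    if (day == 6 || day == 7) && total > 100.0 { total * 0.9 } else { total }
}

// After: the extracted predicates state the rule themselves.
fn is_weekend(day: u8) -> bool {
    day == 6 || day == 7
}

fn qualifies_for_discount(total: f64) -> bool {
    total > 100.0
}

fn price_v2(total: f64, day: u8) -> f64 {
    if is_weekend(day) && qualifies_for_discount(total) {
        total * 0.9
    } else {
        total
    }
}

fn main() {
    // Both versions compute the same result; only readability differs.
    assert_eq!(price_v1(200.0, 6), price_v2(200.0, 6));
    assert_eq!(price_v2(200.0, 1), 200.0);
}
```

Same behavior in both versions; the refactor simply moves the explanation out of a comment and into the code.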

What otherwise makes sense to me is a high-level description of the architecture.

1 point

How were you feeding it?

There’s a world of difference between using ChatGPT and something like Copilot within a mature codebase.

Once a few of the Copilot roadmap features are added, I suspect you’ll be seeing yet another leap forward.

Too many people commenting on this subject focus on where the tech is today, without properly considering the jump from where it was a year ago to where it is now, and what that implies for next year or the year after.

2 points

I’m mostly using ChatGPT4, because I don’t use vscode (I use helix), and from what I’ve seen from colleagues, the current Copilot(X) is not helpful at all…

I describe the problem (context etc.), maybe paste some code there, and hope that it gets what I mean. When it doesn’t (which seems to be rather often), I try to help it with the context it hasn’t gotten, but it very often fails unless the code is rather simple (i.e. boilerplaty). Even when I want GPT4 to generate a bunch of boilerplate, it inserts something like // repeat this 20 times in the middle of the code it should actually generate, and even if I tell it multiple times that it should generate the exact code, it fails pretty much every time, even with increased context size via the API, where it should actually be able to do it in one go. The gpt4-0314 model (via the API) seems to be a bit better here.

I’m absolutely interested in where this leads, and I’m the first to monitor all the changes, but right now it slows me down rather than really helping me. Copilot may be interesting in the future, but right now it’s dumb as fu… I’m not writing boilerplaty code; it’s rather complex stuff, and it fails catastrophically there. I don’t see that changing in the near future. GPT4 got dumber over the course of the last half year; it was certainly better at the beginning. I can remember being rather impressed by it, but now, meh…

It’s good for natural language stuff though, but not really for novel, creative stuff in code (I’m doing most of my stuff in Rust btw.).

But GPT5 will be interesting. I doubt that I’ll really profit from it for code-related stuff (maybe GPT6 then, or so), but we’ll see… All the other developments in that space are also quite interesting. So when it’s actually viable to train or constrain your own LLM on your own bigger codebase, so that it really gets the details and gives actually helpful suggestions (e.g. something like the recent CodeLlama release), this stuff may become more interesting for actual coding.

I’m not even letting it generate comments (e.g. above functions), because they currently look kind of like this (figuratively; fancier and wordier, but not really helpful):

// this variable is of type int
let a = 8;

Programmer Humor

!programmer_humor@programming.dev
