From my experience with ChatGPT:
- It will NEVER consistently give you only the value in the response: sooner or later it adds introductory text like it’s talking to a human. No matter how many times I tried to get it to give back just the answer alone, it never did so consistently.
- ChatGPT is terrible with numbers. It can’t count, do math, none of that. So asking it to do byte math is asking for a world of hurt.
If this isn’t joke code, that is scary.
I know it is, but I’ve also seen people try to use ChatGPT for similar things as a serious endeavor.
For 1, that’s why you say “Format your answer as this exact sentence: ‘The number of bytes required (rounded up) is exactly # bytes.’, where # is the number of bytes.” And then regex for that sentence. What could go wrong?
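Half joking, but the extraction side of that really is only a few lines. A minimal sketch; the sample reply and the exact pattern are invented for illustration:

```python
import re

# Invented example of a reply that ignores the "no extra text" instruction.
reply = "Sure thing! The number of bytes required (rounded up) is exactly 4096 bytes."

# Anchor on the exact sentence we demanded; tolerate chit-chat around it.
m = re.search(r"The number of bytes required \(rounded up\) is exactly (\d+) bytes", reply)
num_bytes = int(m.group(1)) if m else None  # None means the model went off-script
```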
Also, it can do math somewhat consistently if you let it show its work, but I still wouldn’t rely on it as a cog in code execution. It’s not nearly reliable enough for that.
I know a guy who was working on something like this. They just had the call to the model loop until the response met whatever criteria the code needed (e.g. a single number, a specifically formatted table, viable code, etc.) or exit after a number of failed attempts. That seemed to work pretty well; it might mess up from time to time, but with the right prompt it’s unlikely to do so repeatedly when asked again.
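Presumably something along these lines. A sketch, assuming the official OpenAI Python SDK; the model name and validator are placeholders:

```python
import re
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()

def ask_until_valid(prompt: str, is_valid, max_attempts: int = 5) -> str:
    """Re-ask the model until a reply passes is_valid, or give up."""
    for _ in range(max_attempts):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        if is_valid(reply):
            return reply
    raise RuntimeError(f"no valid reply after {max_attempts} attempts")

# e.g. accept only a single bare integer
answer = ask_until_valid(
    "How many bytes does this struct need? Reply with one integer and nothing else.",
    lambda r: re.fullmatch(r"\d+", r.strip()) is not None,
)
```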
I’m currently a guy working on something like this! It’s even simpler now, since the ChatGPT API supports structured output: you give it a JSON schema and it’s guaranteed to respond with JSON that validates against that schema. I spent a couple of weeks hacking at it and I’m positively impressed. I’ve had clean JSON 100% of the time, and the data extraction is pretty reliable too.
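For anyone who hasn’t tried it, it looks roughly like this. A sketch assuming the current OpenAI Python SDK; the model name and the schema are placeholders tied to the byte-count example upthread:

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()

# Hypothetical schema for the byte-count example above.
schema = {
    "type": "object",
    "properties": {"bytes_required": {"type": "integer"}},
    "required": ["bytes_required"],
    "additionalProperties": False,
}

completion = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # placeholder: a model with structured-output support
    messages=[{"role": "user", "content": "How many bytes does this struct need, rounded up?"}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "byte_count", "strict": True, "schema": schema},
    },
)

# With strict mode the content is guaranteed to parse and validate against the schema.
data = json.loads(completion.choices[0].message.content)
```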
The tooling is actually reaching a sweet spot right now where it makes sense to integrate LLMs in production code (if the use case makes sense and you haven’t just shoe-horned it in for the hype).
Fair play to OpenAI - I still think LLMs are overhyped, but they’re moving things along constantly in impressive ways.
This works well too, and with many different models: https://github.com/guardrails-ai/guardrails
Money generator for bug bounty hunters
Knowing GPT users, this is probably not satire.
You don’t need to cast the return value from malloc.
True. Although given how easy it is to cast void pointers to the wrong damn thing, it would be nice if you did; it makes refactoring much easier. Makes me appreciate std::any all the more.
Hey, best of luck figuring that out: