From his website stallman.org:
Richard Stallman has cancer. Fortunately it is slow-growing and manageable follicular lymphoma, so he will probably live many more years nonetheless. But he now has to be even more careful not to catch Covid-19.
Recent video of him speaking at GNU 40 Hacker Meeting. Screenshots of video stream.
GPT, for example, fails at calculations on problems like knapsack, adjacency matrices, Huffman trees, etc. It starts giving garbled output.
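For reference, the 0/1 knapsack problem mentioned above has a textbook dynamic-programming solution that any conventional program computes exactly — a minimal Python sketch (the example weights and values are made up for illustration):

```python
# Standard 0/1 knapsack via dynamic programming -- the kind of exact
# computation the comment says GPT gets wrong.
def knapsack(weights, values, capacity):
    # best[c] = max value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities backwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([3, 4, 5], [30, 50, 60], 8))  # -> 90 (items of weight 3 and 5)
```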
Ask it a simple question a calculator can answer, say the square root of 48. It will give the wrong answer.
The current LLMs can’t loop and can’t see individual digits, so their failure at seemingly simple math problems is not terribly surprising. For some problems it can help to rephrase the question so that the LLM goes through the individual steps of the calculation, instead of stating the result directly.
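To illustrate what "going through the individual steps" looks like, here is the square-root-of-48 question done iteratively with Newton's method, one explicit step at a time (the function name is just for this sketch):

```python
# Newton's method for sqrt(n): repeatedly average x and n/x.
# Each iteration is an explicit, checkable step -- exactly the kind of
# looping computation current LLMs struggle to do in one shot.
def sqrt_steps(n, iterations=6):
    x = n / 2  # initial guess
    steps = []
    for _ in range(iterations):
        x = (x + n / x) / 2
        steps.append(x)
    return steps

for i, s in enumerate(sqrt_steps(48), 1):
    print(f"step {i}: {s:.6f}")
# converges toward 6.928203...
```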
And more generally, LLMs aren’t exactly the best way to do math anyway. Humans aren’t any good at it either; that’s why we invented calculators, which can do the same task with a lot less computing power and a lot more reliability. LLMs that can interact with external systems are already available behind a paywall.
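A rough sketch of that "LLM plus external calculator" pattern: the model proposes an arithmetic expression and a small, safe evaluator (not the model) computes it. `model_output` below is a stand-in string, not a real model call:

```python
import ast
import operator

# Safe arithmetic evaluator: walks a parsed expression tree and only
# allows basic numeric operations, so the "tool" side is deterministic.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr):
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

model_output = "48 ** 0.5"  # pretend the LLM asked for this calculation
print(safe_eval(model_output))  # -> 6.928203230275509
```

The point of the split is that the model only has to produce the right expression; the arithmetic itself is done reliably outside it.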
https://www.deepmind.com/blog/competitive-programming-with-alphacode
People overestimate how much it matters that AI “doesn’t have the capacity to understand its output”.
Even if it doesn’t, is that a massive problem to overcome? There are studies showing that if you have an AI list the potential problems with an output and then apply those critiques to its own output, it performs significantly better. Perhaps we’re just a recursive algorithm away from that.
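A hypothetical sketch of that critique-then-revise loop — `llm` here is a trivial stub standing in for a real model API, so only the control flow is meaningful:

```python
def llm(prompt):
    # Stub: a real implementation would call a model here.
    return "revised: " + prompt[-40:]

def critique_and_revise(question, rounds=2):
    # First draft, then repeatedly: ask for problems, ask for a fix.
    answer = llm(question)
    for _ in range(rounds):
        critique = llm(f"List problems with this answer: {answer}")
        answer = llm(f"Question: {question}\nAnswer: {answer}\n"
                     f"Critique: {critique}\nRewrite the answer fixing these issues.")
    return answer

print(critique_and_revise("What is the square root of 48?"))
```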