“The chatbot gave wildly different answers to the same math problem, with one version of ChatGPT even refusing to show how it came to its conclusion.”
It’s getting worse. And because it’s a black-box model, they don’t know why. The computer science professor here likens it to how human students make mistakes… but human students make mistakes because they don’t have perfect recall, mishear things being told to them, or are tired and/or not paying attention… a bunch of reasons that basically come down to having a human body that needs food, rest, and water. A thing a computer does not have.
The only reason ChatGPT should be getting math wrong is that it’s getting inputs that are wrong, but without visibility into the model they can’t figure out where it’s going wrong or who fed it the wrong info.
I’ll just leave this here: How to make AGI reliably safe by the end of 2023
I am writing a screenplay about a ship AI and problems I imagined and sleep is one of them. Just because it’s technology doesn’t mean it doesn’t need rest.
“problems I imagined and sleep is one of them. Just because it’s technology doesn’t mean it doesn’t need rest.”
That actually is what it means.
I dunno, one of the very first tests I gave ChatGPT was one of those order-of-operations equations, and this was in like week 2. It gave me wrong answers every time, even when I explained the correct one. It was very polite about its mistakes, but this problem has always been there.
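For reference, this is the kind of equation being described. The original post doesn’t give the exact expression, so the one below is a hypothetical stand-in, a classic viral example where multiplication/division bind tighter than addition and evaluate left to right:

```python
# A typical order-of-operations test. Standard precedence:
# parentheses first, then multiplication/division left to right.
expr = "8 / 2 * (2 + 2)"

# Step by step: 2 + 2 = 4, then 8 / 2 = 4.0 (left to right), then 4.0 * 4 = 16.0
result = eval(expr)  # Python follows standard operator precedence
print(result)  # 16.0
```

People often get 1 here by doing the multiplication before the division, which is exactly the kind of slip a model trained on human-written text can reproduce.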
You think that’s bad? My calculator can’t even finish a simple sentence.
A single word can be a full sentence, unless answers to either/or questions don’t count as sentences.
Or is this one of those logic things where a train is only a train when the railway engine is connected to something?