ChatGPT generates cancer treatment plans that are full of errors: researchers at Brigham and Women's Hospital found that OpenAI's chatbot provided false information when asked to design cancer treatment plans.
Or they could feed the current model a reputable source of medical information, for example by retrieving relevant passages from it at query time, as in the sketch below.
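A minimal sketch of that idea, assuming a retrieval-augmented prompting setup: pull the most relevant passages from a vetted corpus and instruct the model to answer only from them. The corpus snippets, the keyword-overlap scoring, and the `build_grounded_prompt` helper are illustrative assumptions, not any particular product's API.

```python
from collections import Counter

# Hypothetical vetted snippets, e.g. drawn from clinical guidelines.
CORPUS = [
    "NCCN guidelines recommend multidisciplinary review before selecting a regimen.",
    "Treatment choice depends on tumor stage, histology, and patient performance status.",
    "Chemotherapy dosing must be adjusted for renal and hepatic function.",
]

def score(query: str, passage: str) -> int:
    """Crude keyword-overlap score; a real system would use embeddings."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def build_grounded_prompt(question: str, top_k: int = 2) -> str:
    """Attach the most relevant vetted passages so the model answers from them."""
    ranked = sorted(CORPUS, key=lambda p: score(question, p), reverse=True)
    context = "\n".join(f"- {p}" for p in ranked[:top_k])
    return (
        "Answer using ONLY the sources below; say 'not covered' otherwise.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("What should guide the choice of a chemotherapy regimen?"))
```

The point of the "answer only from the sources" instruction is to constrain the model to the vetted material rather than whatever it absorbed during training, though as the reply below notes, that still does not guarantee correct answers.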
That wouldn’t guarantee correct answers.
It’s arguably more dangerous if ChatGPT gives mostly sane, specific medical advice, because that leads people to put more trust in it than they should.