Small rant: basically, the title. If it said it doesn’t know the answer instead of answering every question, it would be trustworthy.


The problem is that they’re also prone to making up justifications for why they are correct.

There are various techniques for detecting and correcting hallucinations, but they all increase cost and none is a silver bullet (one of them is sketched below).

But the rate at which hallucinations occur dropped with the last jump in pretrained models, and it will likely drop further with the next jump too.
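
To make the “various techniques” bit concrete, here’s a rough sketch of one of the cheaper ones, self-consistency checking: sample the model several times and only trust an answer the samples mostly agree on. The `generate` callable is a hypothetical stand-in for whatever model call you use, not any particular API, and every extra sample is an extra model call, which is exactly the added cost I mean.

```python
from collections import Counter
from typing import Callable, Tuple

def self_consistency_check(
    generate: Callable[[str], str],  # hypothetical: prompt -> model answer
    prompt: str,
    n_samples: int = 5,
    min_agreement: float = 0.6,
) -> Tuple[str, bool]:
    """Sample the model n_samples times; return (majority answer, trusted?)."""
    # Normalise answers so trivially different phrasings still count as agreement.
    answers = [generate(prompt).strip().lower() for _ in range(n_samples)]
    majority, count = Counter(answers).most_common(1)[0]
    # Low agreement across samples is a (rough) hallucination signal.
    trusted = count / n_samples >= min_agreement
    return majority, trusted
```

It’s not a silver bullet either: if the model is consistently wrong, the samples will happily agree with each other.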

