For example, if someone creates something new that is horrible for humans, how will AI understand that it is bad if it doesn’t have other horrible things to relate it to?

It’s not even making decisions. It’s following instructions.

ChatGPT’s instructions are very advanced, but the decisions have already been made. It follows the prompt and its reference material to produce the statistically most likely response.

It’s like a kid building a Lego kit: the kid isn’t deciding where the pieces go, just following the instructions.

Similarly, between the prompt, the training, the very careful instructions on how to train, and the instructions that limit objectionable responses… all it’s doing is following instructions that were already defined.
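A rough way to picture “producing the most likely response” in code: here’s a toy Python sketch of a word predictor that just looks up which word most often followed the current one in some made-up training text. Everything here (the data, the function names) is invented for illustration, and real models like ChatGPT are enormously more complex, but the spirit is the same: statistics, not decisions.

```python
# Toy sketch, NOT how ChatGPT actually works: given the current word,
# return the continuation the "training" text saw most often.
from collections import Counter

# Hypothetical toy corpus standing in for real training data.
training_text = "the cat sat on the mat the cat sat on the rug".split()

# Count which word follows each word in the training text.
next_word_counts: dict[str, Counter] = {}
for prev, nxt in zip(training_text, training_text[1:]):
    next_word_counts.setdefault(prev, Counter())[nxt] += 1

def most_common_next(word: str) -> str:
    """Return the word that most often followed `word` in training."""
    return next_word_counts[word].most_common(1)[0][0]

# "Prompt" it with a word; it follows its statistics, not a decision.
print(most_common_next("cat"))  # -> "sat"
print(most_common_next("the"))  # -> "cat" (seen twice, vs "mat"/"rug" once each)
```

The predictor never weighs whether “sat” is a good idea; it just reports what the counts say, which is the point of the analogy above.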

