KICK TECH BROS OUT OF 196

1 point

What about this? These weird little dictionaries have lots of emergent properties we’re still exploring.

2 points

The paper states that the graphs representing those relations are the result of training LLMs on a very small subset of unambiguous true and false statements.

While these emergent properties may provide interesting avenues for model refinement and for inspecting outputs, that doesn’t change the fact that these weird little dictionaries aren’t doing anything truly unexpected. We’re just learning about the extra information carried along with the training data.

It’s not far removed from the primary complaint of Bender and Gebru’s “On the Dangers of Stochastic Parrots,” where they point out the ways our biases are implicitly trained into LLMs because of uncontrolled and unexamined inputs: except in this case those “biases” are the linguistics of truth and lies in unambiguous boolean inputs.

1 point

This may provide interesting avenues for model refinement that don’t rely on the model spitting things out and being retrained by a “consciousness” telling it yes or no, or feeding it additional info.

1 point

Only if the “direction of truth” exists in the wild with unchecked training data.

That clustering is a representation of the nature of the data fed to the model: all the training data was unambiguously true or false… It’s not surprising that it clusters.
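A toy sketch of that point (everything here is made up for illustration, not taken from the paper): if the labeled statements are unambiguously true or false, their representations form two clusters, and even the simplest probe, the difference of the two class means, separates them almost perfectly on synthetic stand-in “activations”:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for hidden activations: statements labeled
# unambiguously true or false, modeled as two Gaussian clusters.
d = 64
mu_true = rng.normal(size=d)
mu_false = rng.normal(size=d)
acts_true = mu_true + 0.5 * rng.normal(size=(200, d))
acts_false = mu_false + 0.5 * rng.normal(size=(200, d))

# "Direction of truth" as a mass-mean probe: difference of class means.
direction = acts_true.mean(axis=0) - acts_false.mean(axis=0)

# Project each activation onto the direction, thresholding at the
# midpoint between the two projected class means.
midpoint = (acts_true.mean(axis=0) + acts_false.mean(axis=0)) / 2
scores_true = (acts_true - midpoint) @ direction
scores_false = (acts_false - midpoint) @ direction

acc = ((scores_true > 0).mean() + (scores_false < 0).mean()) / 2
print(f"probe accuracy: {acc:.2f}")
```

With cleanly separated clusters the probe is near-perfect, which is the point: finding such a direction tells you the training labels were unambiguous, not that the model discovered truth. Whether a direction like this survives messy, unchecked training data is the open question.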
