8 points

I think the key problem with LLMs is that they have no grounding in physical reality. They're just trained on a whole bunch of text data, and the topology of the network ends up being moulded to represent the patterns in that data. I suspect that what's really needed is to train models on interactions with the physical world first, to build an internal representation of how it works, the same way children do. Once a model develops an intuition for how the world works, it could then be taught language in that context.
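
To make the ordering concrete, here's a minimal PyTorch sketch of the two-phase idea. None of this is an existing system: the dimensions, the toy "physics", and the random placeholder data are all invented just to illustrate the sequence of learning a world model from (observation, action, next observation) transitions first, freezing it, and only then fitting a language head on top of that learned representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, LATENT_DIM, VOCAB = 32, 4, 64, 1000

# Phase 1: build an internal representation by predicting what happens next
# when an action is taken -- a stand-in for interacting with the physical world.
encoder = nn.Sequential(nn.Linear(OBS_DIM, LATENT_DIM), nn.ReLU())
dynamics = nn.Linear(LATENT_DIM + ACT_DIM, OBS_DIM)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(dynamics.parameters()), lr=1e-3
)

for _ in range(1000):
    obs = torch.randn(64, OBS_DIM)                       # placeholder sensor readings
    act = torch.randn(64, ACT_DIM)                       # placeholder actions
    next_obs = obs + 0.1 * act.sum(dim=1, keepdim=True)  # toy stand-in "physics"
    pred = dynamics(torch.cat([encoder(obs), act], dim=1))
    loss = F.mse_loss(pred, next_obs)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: freeze the world model and teach language in that context,
# predicting tokens conditioned on the learned latent state.
for p in encoder.parameters():
    p.requires_grad_(False)

lang_head = nn.Linear(LATENT_DIM, VOCAB)
opt2 = torch.optim.Adam(lang_head.parameters(), lr=1e-3)

for _ in range(1000):
    obs = torch.randn(64, OBS_DIM)
    tokens = torch.randint(0, VOCAB, (64,))              # placeholder "captions" of the scene
    logits = lang_head(encoder(obs))
    loss = F.cross_entropy(logits, tokens)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```

In a real setup the placeholder tensors would be actual sensor streams and captions and the networks far bigger, but the ordering is the point: dynamics first, language second, grounded in the representation the first phase produced.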
