cross-posted from: https://lemmy.ml/post/14869314

“I want to live forever in AI”

2 points

> The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so.

But the construction workers aren’t the ones who designed the road. They’re just building some small part of it. In the LLM case, that might be like an editor who is only supposed to go over the text and verify the punctuation is correct, nothing else. The LLM, though, is the author of the entire text. So it’s not like a construction worker building some tiny section of a road; it’s like the civil engineer who designed the entire highway.

> Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world

No, it doesn’t. They learn nothing. They’re simply able to generate text that looks like the text generated by people who do know math. They certainly don’t know any concepts. You can see that by how badly they fail when you ask them to do simple calculations: they quickly start generating text full of fundamental mistakes, because they’re not actually doing math at all, they’re just generating plausible next words.

> The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

No, there’s no intelligence and no reasoning. They can fool humans into thinking there’s intelligence there, but that’s like a scarecrow convincing a crow that there’s a human, or a human-like creature, out in the field.

> But we as humans might be machines, too

We are meat machines, but we’re meat machines that evolved to reproduce. That means a need/desire to get food, shelter, and eventually a mate. Those drives hook into the brain to enable long- and short-term planning to achieve those goals. We don’t generate language for its own sake, but in pursuit of a goal. An LLM doesn’t have that. It merely generates plausible words. There’s no underlying drive. It’s more a scarecrow than a human.

1 point

Hmm. I’m not really sure where to go with this conversation. That contradicts what I learned about machine learning in undergraduate computer science, and what seems to be the consensus in science… But I’m also not a CS teacher.

We deliberately choose the model size and training parameters, and implement some trickery to prevent the model from simply memorizing things. That forces it to form models of concepts. And that is what we want, and what makes machine learning interesting/usable in the first place.

You can see that by asking them to apply their knowledge to something they haven’t seen before. And we can look a bit inside at the vectors, activations and so on. For example, a cat is more closely related to a dog than to a tractor. And it has learned the rough concept of a cat, its attributes and so on: it knows that a cat is an animal, has fur, maybe has a gender; that the concept “software update” doesn’t apply to a cat. This is a model of the world the AI has developed. They learn all of that, and people regularly probe them and find out that they do.
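
To show what “more closely related” means in vector space, here’s a rough Python sketch. The 4-dimensional vectors are completely made up for illustration (real models learn embeddings with hundreds or thousands of dimensions), but the distance idea is the same:

```python
import numpy as np

# Made-up 4-dimensional "embeddings", just for illustration.
embeddings = {
    "cat":     np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":     np.array([0.8, 0.9, 0.2, 0.1]),
    "tractor": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    """Similarity of direction: near 1.0 = related, near 0.0 = unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))      # high (~0.99)
print(cosine_similarity(embeddings["cat"], embeddings["tractor"]))  # low  (~0.12)
```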

Doing maths with an LLM is silly. Using an expensive computer to do billions of calculations to maybe get a result that a calculator, or ten CPU cycles on any computer, could produce is just wasting energy and money. And there’s a good chance it’ll make something up. That’s correct, and a side-effect of intended behaviour. However… it seems to have memorized its multiplication tables. And I remember reading a paper specifically about LLMs and how they’ve developed concepts of some small numbers/amounts. There are certain parts that get activated that form a concept of small amounts, like what 2 apples are, or five of them. As I remember, it only works for very small amounts, and it wasn’t straightforward but had weird quirks. But it’s there. Unfortunately I can’t find that source anymore or I’d include it. But there’s more science.

And I totally agree that predicting token by token is how LLMs work. But how they work and what they can do are two very different things. More complicated things like learning and “intelligence” emerge from those simpler processes, which are just a means of doing something. It’s consensus in science that ML can learn and form models; it’s kind of in the name of machine learning. You’re right that it’s very different from what and how we learn, and there are limitations due to the way LLMs work. But learning and “intelligence” (with a fitting definition) is something all AI does.

LLMs just can’t learn from interacting with the world (they need to be stopped and re-trained on a big computer for that), and they don’t have any “state of mind”. And they can’t think backwards, or do other things that aren’t possible by generating token after token. But there isn’t any comprehensive study on which tasks are and aren’t possible with this way of “thinking”. At least not that I’m aware of.
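
Just to make the “token after token” part concrete, here’s a toy sketch in Python. A real LLM replaces the lookup table with a huge neural network trained on terabytes of text, but the generation loop itself really is this simple; all tokens and probabilities below are invented for illustration:

```python
import random

# A toy "language model": the probability of the next token given the
# current one. In a real LLM these probabilities come from a neural
# network; here they are invented for illustration.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(token, max_tokens=10):
    """Generate text one token at a time, the way an LLM does
    (just with a lookup table instead of a neural network)."""
    output = [token]
    for _ in range(max_tokens):
        candidates = bigram_probs.get(token)
        if candidates is None:
            break
        # Sample the next token according to its probability.
        tokens, weights = zip(*candidates.items())
        token = random.choices(tokens, weights=weights)[0]
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat"
```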

(And as a sidenote: “coming up with (wrong) things” is something we want. I type in a question and want it to come up with a text that answers it. Sometimes I want creative ideas. Sometimes it should stick to the truth and not get creative with it. And sometimes we want it to lie or withhold the truth, like in every prompt of any commercial product that instructs it not to reveal those internal instructions to the user. We definitely want all of that. But we still need to figure out a good way to guide it, for example not to get too creative with simple maths.)

So I’d say LLMs are limited in what they can do. And I’m not at all believing Elon Musk: I’d say it’s still not clear whether that approach can bring us AGI, and I have some doubts whether that’s possible at all. But narrow AI? Sure. We see it learn and do some tasks. It can learn and connect facts and apply them. Generally speaking, LLMs are in fact an elaborate form of autocomplete. But in the process they learned concepts and something akin to reasoning skills and a form of simple intelligence. Being fancy autocomplete doesn’t rule that out, and we can see it happening. And it is unclear whether fancy autocomplete is all you need for AGI.

2 points

> That forces it to form models of concepts.

It can’t form models of concepts. It can only form models of which words tend to follow other words. It has no understanding of the underlying concepts.

> You can see that by asking them to apply their knowledge to something they haven’t seen before.

That can’t happen because they don’t have knowledge, they only have sequences of words.

> For example, a cat is more closely related to a dog than to a tractor.

The only way ML models “understand” that is in terms of words or pixels. When they’re generating text related to cats, the words they generate are closer to the words related to dogs than to the words related to tractors. When dealing with images, it’s the same basic idea. But there’s no understanding there. They don’t get that cats and dogs are related.

This is fundamentally different from how human minds work, where a baby learns that cats and dogs are similar before ever having a name for either of them.

1 point

I’m sorry, but now this is getting completely wrong…

Read the first paragraph of the Wikipedia article on machine learning, or the introduction of any of the literature on the subject. The “generalization” includes that model-building capability. They go into a bit more detail later; they specifically mention “to unseen data”. And “learning” is also there. I don’t think the Wikipedia article is particularly good at explaining it, but at least the first sentences lay down what it’s about.
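
That “to unseen data” part is easy to demonstrate: the standard move is to hold back part of the data during training and measure performance only on that held-out part. Here’s a minimal sketch with scikit-learn (the dataset and model are just placeholders for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hold back 30% of the data: the model never sees it during training.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# A deliberately small model, so it can't just memorize the training set.
model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)

# Accuracy on data it has never seen -- that's generalization.
print(model.score(X_test, y_test))
```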

And what do you think language and words are for? To transport information. There is semantics… Words have meanings. They name things, abstract and concrete concepts. The word “hungry” isn’t just a funny accumulation of lines and arcs which statistically get followed by other specific lines and arcs… There is more to it: a meaning.

And this is what makes language useful. And the generalization and prediction capabilities are what make ML useful.

How do you learn as a human, if not from words? I mean, there are a few other possibilities, but an efficient way is to use language. You sit in school or uni and someone at the front of the room speaks a lot of words… You read books, and they also contain words?! And language is super useful. A lion mother also teaches her cubs how to hunt, without words. But humans have language, and it’s really a step up in what we can pass down to following generations. We record knowledge in books, can talk about abstract concepts, feelings, ethics, theoretical concepts. We can write down how gravity and physics and nature work, just with words. That’s all possible with language.

I can look up whether there is a good article explaining how learning concepts works and why that’s the fundamental thing that makes machine learning a field in science… I mean, ultimately I’m not a science teacher… And my literature is all in German, and I returned it to the library a long time ago. Maybe I can find something.

Are you by any chance familiar with the concept of embeddings, or vector databases? I think they showcase that it’s not just letters and words in the models. These vectors/embeddings that the input gets converted to correspond to concepts. They point at the concept of “cat” or “presidential speech”. And you can query these databases: point at “presidential speech” and find a representation of it in that area; store the speech with that key and find it later by querying what Obama said at his inauguration… That’s oversimplified, but maybe it visualizes a bit better that it’s not just letters and words that get stored in the models, but the actual meanings.

Words get converted into a (multidimensional) vector space, and the model operates there. These word representations are called “embeddings”, and transformer models, the current architecture for large language models, use them.
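
Here’s a toy sketch of what such a vector database does. The 3-dimensional vectors are invented stand-ins for what a real embedding model would produce, but the store/query mechanics are the same idea:

```python
import numpy as np

# A toy vector store: (embedding, text) pairs. The embeddings here are
# invented; a real system would get them from an embedding model.
store = []

def add(embedding, text):
    store.append((np.asarray(embedding, dtype=float), text))

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def query(embedding, top_k=1):
    """Return the stored texts whose embeddings are closest to the query."""
    q = np.asarray(embedding, dtype=float)
    ranked = sorted(store, key=lambda item: cosine(item[0], q), reverse=True)
    return [text for _, text in ranked[:top_k]]

add([0.9, 0.1, 0.0], "transcript of a presidential speech")
add([0.1, 0.9, 0.1], "article about cats")

# A query vector near the "presidential speech" region of the space.
print(query([0.8, 0.2, 0.1]))  # -> ['transcript of a presidential speech']
```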

Edit: Here you are: https://arxiv.org/abs/2304.00612

