cross-posted from: https://lemmy.ml/post/14869314

“I want to live forever in AI”

1 point

Isn’t the reward function in reinforcement learning something like a desire it has? I mean, training works because we give the model some function to minimize or maximize… a goal that it strives for?! Sure, it’s a mathematical way of doing it, and in no way as complex as the different and sometimes conflicting desires and goals I have as a human… But nonetheless I’d consider this a desire and a reason to do something at all, or machine learning wouldn’t work in the first place.
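To make the point concrete: here’s a minimal, hypothetical sketch of what “give it some function to minimize” means. The loss function, target value, and learning rate below are all made up for illustration; the only “goal” the system has is to push one number downhill.

```python
# Toy illustration (hypothetical): training is just nudging a parameter
# to minimize a loss function -- that loss is the system's only "goal".

def loss(w):
    # How far w is from the (arbitrary) target value 3.0.
    return (w - 3.0) ** 2

def grad(w):
    # Derivative of the loss with respect to w.
    return 2.0 * (w - 3.0)

w = 0.0                  # start from an arbitrary parameter value
for _ in range(100):
    w -= 0.1 * grad(w)   # gradient descent: step against the gradient

print(w)                 # w ends up very close to 3.0
```

Whether “wants to make this number small” counts as a desire is exactly the philosophical question under debate here.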

2 points

The reward function for an LLM is about generating a next word that is reasonable. It’s like a road-building robot that’s rewarded for each millimeter of road built, but has no intention to connect cities or anything. It doesn’t understand what cities are. It doesn’t even understand what a road is. It just knows how to incrementally add another millimeter of gravel and asphalt that an outside observer would call a road.

If it happens to connect cities it’s because a lot of the roads it was trained on connect cities. But, if its training data also happens to contain a NASCAR oval, it might end up building a NASCAR oval instead of a road between cities.
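The “one millimeter at a time” idea can be sketched with a toy bigram model (a deliberately crude stand-in for an LLM; the corpus and function names are invented for illustration). At every step it only asks “what usually comes next?” — there is no plan for where the sentence is going.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real LLM is vastly larger, but the local
# objective is the same: score a plausible next token given the context.
corpus = "the road is long and the road is wide and the road connects cities".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # Greedily pick the most frequent follower -- one "millimeter" at a time.
    return following[word].most_common(1)[0][0]

word, out = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Each individual step is locally plausible, but nothing in the model knows (or cares) whether the output as a whole goes anywhere — the “roads connect cities” only to the extent that the training text did.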

1 point

That is an interesting analogy. In the real world it’s kinda similar. The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so. And it happens to be their job to build roads. Their desire is probably to get through the day and earn a decent living. And further along the chain, not even their boss nor the city engineer necessarily “wants” the road to go in a certain direction.

Talking about large language models instead of simpler forms of machine learning makes it a bit complicated, since it’s an elaborate trick. Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world. The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

I’m not that sure. Sure, the weights of an ML model don’t in themselves have any desire. They’re just numbers. But we have more than that. We give it a prompt, and build chatbots and agents around the models. And these are more complex systems with the capability to do something, like (simple) customer support or answering questions. And in the end we incentivise them to do their job as we want, albeit in a crude and indirect way.

And maybe this is skipping half of the story and directly jumping to philosophy… But we as humans might be machines, too. And what we call desires results from simpler processes that drive us: for example surviving, and wanting to feel pleasure instead of pain. What we do on a daily basis kind of emerges from that and our reasoning capabilities.

It’s kind of difficult to argue, because everything also happens within a context. The world around us shapes us, and at the same time we’re part of bigger dynamics and also shape our world. And large language models, or the whole chatbot/agent, are pretty simplistic things. They can only handle text and images. They don’t have consciousness or the ability to remember/learn/grow with every interaction, as we do. And they do simple, singular tasks (as of now) and aren’t embedded in a super complex world.

But I’d say that why an LLM answers a question correctly (which it can do), given the way supervised learning works, and why the construction worker builds the road towards the other city, given his basic instincts as a human, are kind of similar concepts. Both are results of simpler mechanisms that aren’t directly related to the goal the whole entity is working towards (i.e. needing money to pay for groceries versus paving the road).

I hope this makes some sense…

2 points

> The construction workers also don’t have a “desire” (so to speak) to connect the cities. It’s just that their boss told them to do so.

But, the construction workers aren’t the ones who designed the road. They’re just building some small part of it. In the LLM case that might be like an editor who is supposed to go over the text to verify the punctuation is correct, but nothing else. But, the LLM is the author of the entire text. So, it’s not like a construction worker building some tiny section of a road, it’s like the civil engineer who designed the entire highway.

> Somehow making them want to predict the next token makes them learn a bit of maths and concepts about the world

No, it doesn’t. They learn nothing. They’re simply able to generate text that looks like the text generated by people who do know math. They certainly don’t know any concepts. You can see that by how badly they fail when you ask them to do simple calculations. They quickly start generating text that contains fundamental mistakes, because they’re not actually doing math; they’re just generating plausible next words.
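The pattern-matching-versus-calculating distinction can be caricatured with a toy “model” (entirely hypothetical; real LLMs are far more capable at interpolation, but the failure mode being described is the same shape). It answers memorized prompts perfectly and produces plausible-looking nonsense for anything else:

```python
# Hypothetical sketch: a "model" that memorized some math text rather
# than learning arithmetic. It matches surface patterns, not meaning.
memorized = {
    "2 + 2 =": " 4",
    "7 + 5 =": " 12",
}

def complete(prompt):
    # Return a memorized continuation if this exact prompt was "trained" on...
    if prompt in memorized:
        return memorized[prompt]
    # ...otherwise emit something merely *plausible*: reuse the answer of
    # the most superficially similar prompt, with no actual arithmetic.
    nearest = max(memorized, key=lambda p: sum(a == b for a, b in zip(p, prompt)))
    return memorized[nearest]

print(complete("2 + 2 ="))   # seen in training: correct
print(complete("2 + 3 ="))   # unseen: confident, fluent, and wrong
```

The wrong answer is formatted exactly like a right one, which is the point: fluency and correctness come apart.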

> The “intelligence”, the ability to answer questions and do something akin to “reasoning”, emerges in the process.

No, there’s no intelligence and no reasoning. They can fool humans into thinking there’s intelligence there, but that’s like a scarecrow convincing a crow that there’s a human, or human-like creature, out in the field.

> But we as humans might be machines, too

We are meat machines, but we’re meat machines that evolved to reproduce. That means a need/desire to get food, shelter, and eventually a mate. Those drives hook up to the brain to enable long- and short-term planning to achieve those goals. We don’t generate language for its own sake, but in pursuit of a goal. An LLM doesn’t have that. It merely generates plausible words. There’s no underlying drive. It’s more a scarecrow than a human.
