I’ve been saying this for about a year since seeing the Othello GPT research, but it’s nice to see more minds changing as the research builds up.

Edit: Because people aren’t actually reading and just commenting based on the headline, a relevant part of the article:

New research may have intimations of an answer. A theory developed by Sanjeev Arora of Princeton University and Anirudh Goyal, a research scientist at Google DeepMind, suggests that the largest of today’s LLMs are not stochastic parrots. The authors argue that as these models get bigger and are trained on more data, they improve on individual language-related abilities and also develop new ones by combining skills in a manner that hints at understanding — combinations that were unlikely to exist in the training data.

This theoretical approach, which provides a mathematically provable argument for how and why an LLM can develop so many abilities, has convinced experts like Hinton, and others. And when Arora and his team tested some of its predictions, they found that these models behaved almost exactly as expected. From all accounts, they’ve made a strong case that the largest LLMs are not just parroting what they’ve seen before.

“[They] cannot be just mimicking what has been seen in the training data,” said Sébastien Bubeck, a mathematician and computer scientist at Microsoft Research who was not part of the work. “That’s the basic insight.”

51 points

Is there a difference between being a “stochastic parrot” and understanding text? No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.

An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”; it doesn’t have the concept of understanding. So I lean toward calling it a “stochastic parrot”, although I think there are some interesting philosophical exercises you could do on whether humans are much different, and whether understanding is just an illusion.

8 points

No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.

How do you know a human wouldn’t do the same? We lack the ability to perform the experiment.

An LLM will never say “I don’t know” unless it’s been trained to say “I don’t know”

Also a very human behaviour, in my experience.

7 points

How do you know a human wouldn’t do the same? We lack the ability to perform the experiment.

I agree with you. I think it’s an interesting philosophical debate: whether we truly have free will, whether we really have a level of understanding beyond what LLMs do, or whether we are just a vastly more complex, biological version of an LLM. Like you said, we lack the ability to perform the experiment, so I have to assume that our reactions are novel and spontaneous.

2 points

Fun thought experiment:

Let’s say we have a time machine and we can go back in time to a specific moment to observe how someone reacts to something.

If that person reacts the same way every time, does that mean that by knowing what they would do, you have removed their free will?

1 point

How do you know a human wouldn’t do the same?

Because the human has “circuits” for coherent thought, and language was added later.

2 points

No matter what you call it, an LLM will always produce the same output for the same input if it is in the same state.

You might want to look up the definition of ‘stochastic.’

6 points

They’re not wrong. Randomness in computing is what we call “pseudo-random”, in that it is deterministic provided that you start from the same state, or “seed”.
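The seed point is easy to make concrete. A minimal sketch in plain Python (not any real LLM sampler, just an illustration): a generator re-seeded with the same value replays the exact same “random” sequence, which is why temperature sampling looks stochastic but is reproducible when the seed and model state are fixed.

```python
import random

def sample_tokens(seed: int, vocab: list[str], n: int) -> list[str]:
    # An isolated generator whose entire state is set by the seed.
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "cat", "sat", "on", "mat"]
run_a = sample_tokens(42, vocab, 5)
run_b = sample_tokens(42, vocab, 5)
assert run_a == run_b  # identical state -> identical "random" output
```

The same principle applies to an LLM’s sampler: the token choices come from a pseudo-random generator, so fixing its state fixes the output.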

1 point

That is the quote from the article, not my words. Stochastic parrot is an oxymoron.

1 point

What’s a quote from the article? The term stochastic parrot? It opens on saying that might be an inaccurate description.

29 points

The definition of understanding they use is very shallow compared to how most would define it. Failure to complete a task consistently when numbers are changed, even when they don’t affect the answer, shows a lack of real understanding to most. Asking a model the sheet-drying question, for example, will give different results depending on what numbers you use. Better models are better at generalizing, but are still far from demonstrating what most consider to be real understanding.
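Anyone can probe this themselves with a small harness. A hypothetical sketch (the model call itself is omitted; plug the prompts into whatever chat API you use, and the sheet-drying numbers are illustrative): vary the irrelevant number and check whether the answers agree.

```python
def make_prompt(n_sheets: int) -> str:
    # Drying happens in parallel, so the correct answer is always "5 hours",
    # no matter how many sheets are out at once.
    return (
        f"It takes 5 hours to dry {n_sheets} sheets on a clothesline. "
        f"How long does it take to dry {n_sheets * 3} sheets?"
    )

def consistent(answers: list[str]) -> bool:
    # A model that generalizes should give one answer across all variants.
    return len({a.strip().lower() for a in answers}) == 1

# Generate variants; feed these to a model and pass its replies to consistent().
prompts = [make_prompt(n) for n in (2, 7, 100)]
```

A “stochastic parrot” tends to fail `consistent()` on exactly this kind of variation, because the surface numbers shift the statistics of the prompt even though they don’t change the logic.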

2 points

At some point, don’t we just load it with every angle we (as humans) have, and from there it can derive not only our best answer but a better one as well? I mean, isn’t that what even the shitty version of this usurps?

4 points

A language model can’t determine good from bad because it’s only trained to predict the next token based on what it has seen.
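That training objective can be sketched in a few lines. A toy bigram “model” (my own illustration, nothing like a real transformer) shows the point: it only counts which token followed which in its corpus, so “good” and “bad” never enter into it, only frequent and rare.

```python
from collections import Counter, defaultdict

def train(corpus: list[str]) -> dict:
    # Count, for each token, which tokens followed it in the training text.
    counts: dict = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict(counts: dict, token: str) -> str:
    # Emit the most frequent continuation seen in training -- no quality
    # judgment, just statistics over what the model has seen.
    return counts[token].most_common(1)[0][0]

model = train(["the cat sat", "the cat sat", "the cat ran"])
print(predict(model, "cat"))  # -> "sat" (seen twice, vs "ran" once)
```

Real LLMs replace the counts with learned high-dimensional features, but the objective is the same shape: maximize the probability of the observed next token.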

25 points

Ask them to explain why a pun is funny or the solution to a riddle involving wordplay (cryptic crossword clues are particularly good for this) and you will quickly see they don’t actually understand words.

8 points

I just did and it seems to understand.

Prompt: Can you explain why “I knew a woman who owned a taser. Man, was she stunning!” Is funny?

ChatGPT 3.5: The humor in this statement arises from a play on words. The word “stunning” is typically used to describe someone’s appearance, suggesting they are exceptionally attractive. However, in this context, it’s used in a different way, implying that the woman who owned a taser was stunning in the sense of being capable of stunning or shocking others. The unexpected twist creates a humorous juxtaposition between the two meanings of “stunning.”

7 points

I did some playing around with ChatGPT’s understanding of jokes a while back, and I found that it actually did best at understanding puns, which IMO isn’t surprising since it’s a large language model and puns are deeply rooted in language and wordplay. It didn’t do so well at jokes based on other things, but it still sometimes managed to figure them out too.

I remember discussing the subject in a Reddit thread, and there was a commenter who was super enthused by the notion of an AI that understood humour, because he himself was autistic and never “got” any jokes. He wanted an AI companion that would at least let him know when a joke was being said, so he wouldn’t get confused and flustered. I had to warn him that ChatGPT wasn’t reliable for that yet, but still, it did better than he did, and he was fully human.

3 points

The key word here is “seems”.

1 point

Yeah, riddles work better than puns for what I’m talking about since most popular puns were probably in the training dataset.

Like I said, I’ve had the best (or worst) results using cryptic crossword clues, since their solutions are almost certainly not in the training set. So it actually has to “think for itself”, and you can see just how stupid it really is when it doesn’t have some existing explanation buried somewhere in its training set.

1 point

Use 4, not 3.5. The difference between the two is massive for nuances.

3 points

3.5 is the only free version. I won’t pay a subscription for a chatbot.

5 points

A child under a certain age usually can’t explain advanced concepts either, so the inability to understand one concept doesn’t preclude understanding of others.

1 point

Literally the most cited scientist in machine learning (quoted in the article above) quit his job at Google and went public warning of how quickly the tech was advancing, because a model was able to explain why a joke was funny, which he had previously thought wouldn’t be possible.

-3 points

One joke is a fluke, especially if the joke is out in the public discourse and appeared in some form in the training set. Call me when it can explain any novel joke written by a human where no explanation of that joke exists anywhere in the training data.

4 points

Ok, give me a sample of what you think it will get wrong, and let’s see.

0 points

I like asking Bing Chat to explain memes that I upload to it. If there’s a joke to be had in them, it always gets it.

4 points

Absolutely no way the training set could have included knowyourmeme.com.

1 point

I’ve fed it memes I’ve made. It still gets them.

19 points

I find this extraordinarily unconvincing. Firstly, it’s based on the idea that random graphs are a great model for LLMs because they share a single superficial similarity. That’s not science, that’s poetry.

Secondly, the researchers completely misunderstand how LLMs work. The assertion that a sentence could not have appeared in the training set does not prove anything; that’s expected behaviour. “Stochastic parrot” was never supposed to mean that the model only regurgitates text it has already seen, but rather that the text is a statistically plausible response to the input, based on very high-dimensional feature vectors. Those features definitely could relate to what we think of as meaning or concepts, but they’re meaning or concepts that were inherent in the training material.

12 points

Stupid. LLMs do not create new relationships between words that don’t already exist.

This is all just fluff to make them seem more like AGI, which they never will be.

5 points

Why would that be required for understanding? Presumably during training it made connections between the words it saw. Now that training has stopped, it hasn’t just lost those connections. Sure, it can’t make new ones, but why is that important for using the connections it already has?

1 point

Not sure I understand your question. The article specifically describes the LLM, during training, making connections that were not in the training data, but that’s a human perspective; LLMs are just math.


Technology

!technology@lemmy.world
