-1 points
Removed by mod
8 points

You’re not just confident that asking ChatGPT to explain its inner workings works exactly like a --verbose flag; you’re so sure that’s what is happening that it apparently does not occur to you to explain why you think the output is anything more than plausible text prediction based on its training weights, with no particular insight into the ChatGPT black box.

Is this confidence from an intimate knowledge of how LLMs work, or because the output you saw from doing this looks really really plausible? Try and give an explanation without projecting agency onto the LLM, as you did with “explain carefully why it rejects”


@earthquake You’re correct that projecting agency onto the LLM is problematic, but in doing so we get better-quality results. I’ve argued that we need new words for LLMs instead of “think,” “understand,” “learn,” etc. We’re anthropomorphizing them, and this makes people less critical and gradually shifts their attitudes in incorrect directions.

Unfortunately, I don’t think we’ll ever develop new words which more accurately reflect what is going on.

1 point

Seriously, what kind of reply is this? You ignore everything I said except the literal last thing, and even then it’s weasel words. “Using agential language for LLMs is wrong, but it works.”

Yes, Curtis, prompting the LLM with language more similar to its training data results in more plausible text prediction in the output. Why is that? Because it’s more natural: there’s not a lot of training data on querying a program about its inner workings, so the response to that kind of prompt reads less like natural language.

But you’re not actually getting any insight. You’re just improving the verisimilitude of the text prediction.

1 point

Got it, because the output you saw from doing this looks really really plausible. Disappointing, but what other answer could it have been?

Here’s a story for you: a scientist cannot get his papers published. In frustration, he complains to his co-worker, “I have detailed charts on the different types and amounts of offerings to the idol, and how they correlate with prayers being answered. I think this is a really valuable contribution to understanding how to beseech the gods for intervention in our lives; this will help people! Why won’t they publish my work?”

His co-worker replies, “Certainly! As a large language model I can see how that would be a frustrating experience. Here are five common reasons that research papers are rejected for publication.”
