153 points

Well, yeah. People are acting like language models are full-fledged AI instead of just parrots repeating stuff said online.

77 points

Spicy autocomplete is a useful tool.

But these things are nothing more than that.

12 points

The paper actually argues otherwise, though it’s not fully settled on that conclusion, either.

-41 points

Whenever any advance is made in AI, AI critics redefine AI so it's not achieved yet according to their definition. Deep Blue, the chess machine, was an AI, an artificial intelligence. If you mean human-level or beyond general intelligence, you’re probably talking about AGI or ASI (artificial general or super intelligence, respectively).

And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work. The early chatbots were actual parrots, saying prewritten sentences that they had either been preprogrammed with or picked up from their users. LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all those that came before it, and on parameters fine-tuned during training. Their temperature can be changed to give more or less predictable output, and as such they have the potential for genuinely original output, unlike their parrot predecessors.
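To make the temperature point concrete, here is a minimal sketch of sampling a next token from a probability distribution. It is plain Python with a made-up four-word vocabulary and invented scores, not any real model's internals:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token from a dict of token -> logit, sharpened or softened by temperature."""
    tokens = list(logits.keys())
    # Scale logits by temperature: low T gives a sharper (more predictable) distribution,
    # high T gives a flatter (more varied) one.
    scaled = [logits[t] / temperature for t in tokens]
    # Softmax: turn the scaled logits into probabilities that sum to 1.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token according to those probabilities.
    return random.choices(tokens, weights=probs, k=1)[0]

# Made-up scores for what might follow "The cat sat on the"; purely illustrative.
next_token_logits = {"mat": 4.0, "sofa": 2.5, "roof": 1.5, "moon": -1.0}

print(sample_next_token(next_token_logits, temperature=0.2))  # almost always "mat"
print(sample_next_token(next_token_logits, temperature=1.5))  # noticeably more varied
```

At a low temperature the top-scoring token wins almost every time; at a higher temperature less likely tokens show up more often, which is the "more or less predictable output" knob described above.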

87 points

You completely missed the point. The point is that people have been led to believe LLMs can do jobs that humans do because the output of LLMs sounds like the work people do, when in reality, speech is just one small part of these jobs. It turns out reasoning is a big part of these jobs, and LLMs simply don’t reason.

58 points

Whenever any advance is made in AI, AI critics redefine AI so it's not achieved yet according to their definition.

That stems from the fact that AI is an ill-defined term that has no actual meaning. Before Google Maps became popular, any route-finding algorithm utilizing A* was considered “AI”.

And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work.

Bullshit. These people know exactly how LLMs work.

LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

-18 points
Deleted by creator
-20 points

LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.

Even if (and that’s a big if) an AGI is going to be achieved at some point, there will be people calling it parroting by that definition. That’s the Chinese room argument.

33 points

LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all those that came before it, and on parameters fine-tuned during training.

Which is what a parrot does.

22 points

Yeah, this is the exact criticism. They recombine pieces of language without really doing language. The end result looks like language, but it lacks the important characteristics of language, such as meaning and intention.

If I say “Two plus two is four” I am communicating my belief about mathematics.

If an LLM emits “two plus two is four”, it is outputting a stochastically selected series of tokens linked by probabilities derived from training data. Whether the statement is true or false is accidental.

Hence, stochastic parrot.
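As a toy illustration of what “stochastically selected series of tokens” means, here is a made-up bigram counter over three invented sentences (nothing like a real LLM in scale, but the same principle of picking continuations by observed probability). Whether the emitted statement is arithmetically true never enters into the choice:

```python
import random
from collections import Counter, defaultdict

# Invented "training data"; real corpora contain falsehoods alongside truths.
corpus = [
    "two plus two is four",
    "two plus two is four",
    "two plus two is five",  # a falsehood the model absorbs just as readily
]

# Count which word follows each context: a toy n-gram model, not an LLM.
follow = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    context, nxt = " ".join(words[:-1]), words[-1]
    follow[context][nxt] += 1

counts = follow["two plus two is"]
tokens, weights = zip(*counts.items())
# The continuation is drawn from observed frequencies: usually "four", sometimes "five".
print(random.choices(tokens, weights=weights, k=1)[0])
```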

4 points

This is parrot libel

-12 points

You take in some information, combine that with some previous experiences, and then output words.

Which is what an LLM does.

16 points

AI hasn’t been redefined. For people familiar with the field it has always been a broad term meaning code that learns (subdivided into many types of AI), and for people unfamiliar with the field it has always been a term synonymous with AGI. So when people in the former category put out a product and label it as AI, people in the latter category run with it using their own definition.

For a long time ML had been the popular buzzword in tech and people outside the field didn’t care about it. But then Google and OpenAI started calling ML and LLMs simply “AI” and that became the popular buzzword. And when everyone is talking about AI, and most people conflate that with AGI, the results are funny and scary at the same time.

7 points

and for people unfamiliar with the field it has always been a term synonymous with AGI.

Gamers screaming about the AI of bots/NPCs making them mad beg to differ

5 points

LLMs have more in common with chatbots than AI.

-4 points

You are very skilled in the art of missing the point. LLMs can absolutely be used as chatbots, amongst other things. They are more advanced than their predecessors at this, and work in a different way. That does not stop them from being a form of artificial intelligence. Chatbots and AI are not mutually exclusive terms; the first is a subset of the second. And you may indeed be referring to AGI or ASI when you say AI, a misconception I addressed in my earlier comment.

5 points

I appreciate you taking the time to clarify, thank you!

77 points

https://link.springer.com/article/10.1007/s10676-024-09775-5

Link to the article if anyone wants it

62 points

Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005)

Now I kinda want to read On Bullshit

-14 points

Don’t waste your time. It’s honestly fucking awful. Reading it was like experiencing someone mentally masturbating in real time.

27 points

Yep. You’re smarter than everyone who found it insightful.

23 points

That’s actually a fun read

4 points

Fucking love that article. It sums up everything wrong with AI. Unfortunately, it doesn’t touch on what AI does right: helping idiots like me achieve a slight amount of competence on subjects that such people can’t be bothered to dedicate their entire lives to.

44 points

Suddenly it dawned on me that I can plaster my CV with AI and win out over actually competent people easy peasy.

What were you doing between 2020 and ’23? I was working on my AI skillset. Nobody will even question me, because they have no fucking idea what it is themselves, only that they want it.

14 points

As an engineering manager, I’ve already seen cover letters and intro emails that are so obviously AI-generated that it’s laughable. These tools should be used the way you use them for writing essays: as a framework built from general prompts, but filled in by yourself.

Fake friendliness that was outsourced to an AI is worse than no friendliness at all.

2 points

I didn’t mean AI-generated anything, though 🙄. I meant put lots of ‘AI’ keywords in the resume, in whatever way looks professional but is in reality pure bullshit.

Watch their neurons light up as they see the magic word. Gotta play the marketing game.

You want to be AI ready? Hire me. I have spent three years working with AI and possess invaluable experience that will elevate your company into a new era of rapid development.

1 point

It feels like you didn’t quite understand… If you’ve ever read an AI essay, you can see some of the ways they currently write. When you see facts and figures about what the company does thrown in from the internet, and they sound… artificial… it’s rather obvious that it was AI-written. I’m currently getting AI spam too, and it’s just as easy to spot. It’s the same thing.

Someone used an AI tool to write a cover letter and sent it to me. I’ve seen this a few times. It seems very obvious when you come across it.

I’m sure it’ll get better in the future, but right now it needs massaging in order to sound real. There’s a very obvious uncanny valley that exists with some AI writing. That’s what I’m talking about.

4 points

It’s extremely easy to detect this. Recruiters actively filter out resumes like this for important roles.

39 points

Plot-twist: The paper was authored by a competing LLM.

