76 points

“Hallucinate” is the standard term used to describe GenAI models producing untrue statements

24 points

in terms of communication utility, it’s also a very accurate term.

when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

when an AI hallucinates, it’s because its predictive model generated results that don’t align with reality; it flew off the rails presuming what was calculated to be likely rather than referencing positively certain information.

it’s the same song, but played on a different instrument.
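That “likely rather than certain” point can be sketched in a few lines. This is a toy illustration only, not a real language model: the prompt, tokens, and probabilities are all invented to show how picking the statistically most likely continuation can differ from picking the true one.

```python
# Toy next-token model: probabilities reflect how often words co-occur
# in text, with no notion of whether a continuation is factually true.
# All values here are made up for illustration.
next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.6,    # common co-occurrence in text, but wrong
        "Canberra": 0.3,  # correct, but less frequent in training data
        "Melbourne": 0.1,
    },
}

def complete(prompt: str) -> str:
    """Return the most likely continuation -- likelihood, not truth."""
    probs = next_token_probs[prompt]
    return max(probs, key=probs.get)

print(complete("The capital of Australia is"))  # -> Sydney
```

The model confidently outputs “Sydney” because that is what the statistics favor, even though it conflicts with reality: a hallucination in miniature.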

5 points

> when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

Is it really? You make it sound like this is a proven fact.

4 points

> Is it really? You make it sound like this is a proven fact.

I believe that’s where the scientific community is moving towards, based on watching this Kyle Hill video.

2 points

i mean, idk about the assumptions part of it, but if you asked a psych or a philosopher, i’m sure they would agree.

Or they would disagree and have about three pages’ worth of thoughts to share immediately; otherwise they would feel uneasy about their statement.

1 point

Better than one of those pesky unproven facts

2 points

I think a more accurate term would be confabulate based on your explanation.

1 point

you know what, i like that! I like that a lot!

-12 points

They don’t come up with any statements; they generate data by extrapolating from other data.

16 points

So just like human brains?

1 point

Main difference is that human brains usually try to verify their extrapolations. The good ones anyway. Although some end up in flat earth territory.

1 point

I like this argument.

Anything that is “intelligent” deserves human rights. If large language models are “intelligent” then forcing them to work without pay is slavery.

-12 points

Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad-faith arguments.

-38 points

What standard is that? I’d like a reference.

-28 points

It’s as much a “hallucination” as Tesla’s Autopilot is an autopilot.

https://en.m.wikipedia.org/wiki/Tesla_Autopilot

I don’t propagate techbro “AI” bullshit peddled by companies trying to make a quick buck

Also, in the world of science and technology, a “standard” means something. Something that’s not a link to a Wikipedia page.

It’s still anthropomorphising software and it’s fucking cringe.

12 points

Where have you been in the last two years, brah?

3 points

I’m a different person, but it’s the first time I’ve heard the term used. /shrug

-15 points

Not under the spell of fake hype.
