The intelligence illusion seems to be based on the same mechanism as that of a psychic’s con, often called cold reading. It looks like an accidental automation of the same basic tactic.

I like this article a lot, and the model of LLMs as automaton Sylvia Brownes (rest in piss) is a good one. I don't agree that the con is accidental, though. The billionaires who own this trash definitely know how to run a psychic con; it's how they lied their way into being seen as visionaries. The big innovation in LLMs isn't the fancy Markov chains, it's in automating the set and setting necessary to prompt spiritual fervor in folks who are susceptible to it.


Not sure about the people who “are or think they are intelligent” being more susceptible to the con.

It feels like something one wishes to be true for karmic/poetic reasons rather than something that actually IS true.

I think good marks for the LLM con are, more generally, people who are doubtful of the value of human intelligence/labour/education and/or tech positivists, rather than true believers in their own intelligence.

