
I see Google’s deal with Reddit is going just great…

-6 points

We need to teach the AI critical thinking. Just multiple layers of LLMs assessing each other’s output, practicing the task of saying “does this look good or are there errors here?”

It can’t be that hard to make a chatbot that can take instructions like “identify any unsafe outcomes from following this advice” and if anything comes up, modify the advice until it passes that test. Have like ten LLMs each, in parallel, ask each thing. Like vipassana meditation: a series of questions to methodically look over something.
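(For concreteness, here's a rough sketch of the loop being described, with every name made up: one drafting call, a handful of critic prompts, and a revise-until-the-critics-pass loop. `ask_llm` is a stub for whatever chat API you'd actually wire in; whether the critics can reliably tell good advice from bad is exactly what's argued about below.)

```python
# Hypothetical sketch of the "layered critics" idea: one call drafts advice,
# several critic prompts review it, and the draft is revised until every
# critic signs off or we give up. Nothing here names a real product or API.

def ask_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call; swap in a real client here."""
    raise NotImplementedError("wire this up to an actual LLM API")

CRITIC_PROMPTS = [
    "Identify any unsafe outcomes from following this advice: {draft}",
    "Identify any factual errors in this advice: {draft}",
    "Identify any steps that are impossible to carry out: {draft}",
]

def checked_advice(question: str, max_rounds: int = 5) -> str:
    draft = ask_llm(f"Give advice for: {question}")
    for _ in range(max_rounds):
        complaints = []
        for template in CRITIC_PROMPTS:
            verdict = ask_llm(template.format(draft=draft))
            # assumes critics answer "no issues" when satisfied
            if "no issues" not in verdict.lower():
                complaints.append(verdict)
        if not complaints:
            return draft  # every critic signed off
        draft = ask_llm(
            "Revise this advice to address these problems:\n"
            f"{draft}\n\nProblems:\n" + "\n".join(complaints)
        )
    return draft  # ran out of rounds; still unverified
```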

21 points

It can’t be that hard

woo boy

15 points

i can’t tell if this is a joke suggestion, so i will very briefly treat it as a serious one:

getting the machine to do critical thinking will require it to be able to think first. you can’t squeeze orange juice from a rock. putting word prediction engines side by side, on top of each other, or ass-to-mouth in some sort of token centipede, isn’t going to magically emerge the ability to determine which statements are reasonable and/or true

and if i get five contradictory answers from five LLMs on how to cure my COVID, and i decide to ignore the one telling me to inject bleach into my lungs, that’s me using my regular old intelligence to filter bad information, the same way i do when i research questions on the internet the old-fashioned way. the machine didn’t get smarter, i just have more bullshit to mentally toss out

1 point

Yeah I never assumed it would be magic. Instead I’m basing it on my own observations that an LLM can be asked whether there are errors in a piece of text, and it can identify them correctly.

Also, why would my comment be a joke?

-3 points

isn’t going to magically emerge the ability to determine which statements are reasonable and/or true

You’re assuming P!=NP

7 points

i prefer P=N!S, actually

5 points

you can assume anything you want with the proper logical foundations

10 points

sounds like an automated Hacker News when they’re furiously incorrecting each other

5 points
Deleted by creator
1 point

The fact that Generative Adversarial Networks exist means it isn’t that hard.

By hard I mean hard like a hard math problem, not hard like mopping the floor after a long shift is hard. When I say “not that hard” I mean “possible”.

And you’re right. It is possible.

What I described isn’t a GAN per se though. It’s based on a similar idea, but it’s not the same thing.
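(Since GANs came up: a minimal toy of the generator/discriminator loop that comparison is pointing at, in PyTorch on a 1-D Gaussian target. All sizes and constants are arbitrary, the analogy is loose, and this is not the layered-critics scheme from upthread.)

```python
# Toy GAN: a generator learns to mimic samples from N(3.0, 0.5) while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the target distribution
    fake = G(torch.randn(64, 8))

    # discriminator step: label real as 1, generated as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # generator step: try to make the discriminator call its samples real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # should drift toward ~3.0
```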

0 points
Removed by mod
5 points

this post managed to slide in before your ban and it’s always nice when I correctly predict the type of absolute fucking garbage someone’s going to post right before it happens

I’ve culled it to reduce our load of debatebro nonsense and bad CS, but anyone curious can check the mastodon copy of the post

