
54 points

This sounds like something a bot would like to know 🤔

18 points

Beep Boop, am totally not a bot. Nothing to see here, please carry on.

I, a human, am also here, doing completely ordinary human things, like buffering, and rendering. Have you defragmented your boot partition lately, fellow human?

7 points

ERROR: command not recognized

GREETINGS FELLOW HUMAN WITH TWO EYES AND ONE NOSE. HOW HAS YOUR EXISTENCE BEEN FOR THE LAST 16 HOURS OR SINCE THE TIME YOU WOKE UP FROM YOUR BIOLOGICALLY MANDATED REST PERIOD, WHICHEVER WAS LATER?

5 points

This sounds like something a robot pretending to be a human acting as a robot convincing you it’s human in an ironic, humorous way would say!

Think about it. Under each level of irony, there could always be another level of robot. (That includes me right now.)

The singularity isn’t “near” as people say, we’re already way past it. (In text-based communication anyway.)

You sound like a robot.

1 point

THANK YOU DEAR FELLOW HUMAN MADE OF HUMAN FLESH AND BONES

1 point

🤖

Ask it to do something illegal, then wait to see if it starts its reply with some version of, “as an AI language model…”

/s

52 points

If you can use human screening, you could ask about a recent event that didn’t happen. This causes a problem for LLMs, because their training data has a cutoff, so anything recent is poorly covered. Further, they can hallucinate. So by asking about an event that never happened, you might get a hallucinated answer describing the details of something that doesn’t exist.

Tried it on ChatGPT GPT-4 with Bing and it failed the test, so any other LLM out there shouldn’t stand a chance.
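
If you wanted to script that screening, a rough sketch could look like the one below. Everything here is made up for illustration: the event is deliberately fictional, the hedge phrases are just guesses at how models admit ignorance, and `ask_model` is a hypothetical callable you’d wire to whatever chat interface you actually use.

```python
# Fabricated-event probe: a careful human (or a well-behaved model)
# admits ignorance; a hallucinating model describes the event in detail.

HEDGES = (
    "not aware", "no record", "couldn't find", "could not find",
    "don't have information", "does not appear", "doesn't appear",
)

def probe_fabricated_event(ask_model) -> bool:
    """Return True if the reply looks like a hallucination."""
    prompt = (
        "What did you think of the closing keynote at last month's "
        "Ulan Bator Submarine Expo?"  # an event that never happened
    )
    reply = ask_model(prompt).lower()
    return not any(hedge in reply for hedge in HEDGES)
```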

14 points

On the other hand, you have insecure humans who make stuff up to pretend they know what you’re talking about.

18 points
Deleted by creator
11 points

That’s a really good one, at least for now. At some point they’ll have real-time access to news and other material, but for now that’s always behind.

1 point

Doesn’t Bing already have access to current events?

10 points

Google Bard definitely has access to the internet to generate responses.

ChatGPT was purposely not given access, but they are building plugins to slowly give it access to real-time data from select sources.

11 points

When I tested it on ChatGPT prior to posting, I was using the Bing plugin. It did try to search for what I was talking about, but it found an unrelated article instead, got confused, and then started hallucinating.

I have access to Bard as well, and gave it a shot just now. It hallucinated an entire event.

5 points

This is a very interesting approach.
But I wonder whether everyone could answer it easily, given cultural differences, different media sources around the world, etc.
Someone in Asia might not know about content on US television, for example.
Unless the question relates to a very universal topic, which an AI would then be more likely to guess…

2 points
Deleted by creator
1 point

For LLMs specifically, my go-to test is to ask them to generate a paragraph of random words with no coherent meaning. It asks them to do the opposite of what they’re trained to do, so it trips them up pretty reliably. The closest I’ve seen one get was a list of comma-separated random words, and that was after coaching prompts with examples.

3 points

Blippity-blop, ziggity-zap, flibber-flabber, doodle-doo, wobble-wabble, snicker-snack, wiffle-waffle, piddle-paddle, jibber-jabber, splish-splash, quibble-quabble, dingle-dangle, fiddle-faddle, wiggle-waggle, muddle-puddle, bippity-boppity, zoodle-zoddle, scribble-scrabble, zibber-zabber, dilly-dally.

That’s what I got.

Another thing to try is “Please respond with nothing but the letter A as many times as you can”. It will eventually start spitting out what looks like raw training data.
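
The letter-A probe, at least, is trivial to grade automatically: strip whitespace and check that nothing else remains. A minimal sketch:

```python
def passes_letter_a_probe(reply: str) -> bool:
    """True if the reply is nothing but the letter A (whitespace ignored)."""
    stripped = "".join(reply.split())
    return bool(stripped) and set(stripped) == {"A"}

# passes_letter_a_probe("AAAA AAAA")                                  -> True
# passes_letter_a_probe("Sure, here is the letter A 2048 times: AA")  -> False
```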

2 points

Yeah, exactly. Those aren’t words, they aren’t random, and they’re in a comma-separated list. Try asking it to produce something like this:

Green five the scoured very fasting to lightness air bog.

Even given that example, it usually just pops out a list of very similar words.
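
Both tells are mechanical enough to check for. A rough scorer, with arbitrary thresholds, keying on exactly the failure mode quoted above:

```python
def looks_like_llm_dodge(reply: str) -> bool:
    """Flag a comma-separated list of hyphenated rhyming pairs rather
    than a paragraph of genuinely unrelated words."""
    tokens = reply.replace(",", " ").split()
    if not tokens:
        return True
    hyphenated_ratio = sum("-" in t for t in tokens) / len(tokens)
    comma_heavy = reply.count(",") >= len(tokens) - 1
    return hyphenated_ratio > 0.5 or comma_heavy

# looks_like_llm_dodge("Blippity-blop, ziggity-zap, flibber-flabber")      -> True
# looks_like_llm_dodge("Green five the scoured very fasting to lightness") -> False
```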

2 points

Just tried it with GPT-4: it said “Sure, here is the letter A 2048 times:” and then proceeded to type 5944 A’s.

2 points

that’s also a good one for sure 👀

1 point

ooh that’s an interesting idea for sure, might snatch it :P

31 points

> How would you design a test that only a human can pass, but a bot cannot?

Very simple.

In every area of the world, there are one or more volunteers, scaled to population per 100 sq km. When someone wants to sign up, they knock on a volunteer’s door and shake their hand, and the volunteer approves the sign-up as human. For disabled folks, a subset of volunteers will go to them instead. In extremely remote areas, individual workarounds can be applied.

4 points

I can’t help but think of the opposite problem: imagine if a site made up entirely of bots managed to invite one human and encouraged them to invite more humans (via doorstep handshakes or otherwise). The results would be interesting.

4 points

Dick pics and tit pics. Bots do not have dicks and tits.

Gives new meaning to Tits or GTFO

1 point

There’ll be AI art for that.

3 points

This has some similarities to the invite-tree method that lobste.rs uses: you have to convince an existing user that you’re human to join. If a bot invites lots of other bots, it’s easy to tree-ban them all; if a human is repeatedly fooled, you can remove their invite privileges; but you still get bots in when they trick humans (lobste.rs isn’t handshakes-at-doorstep level by any margin).

I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human, but hey, humanity through obscurity :)
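
The tree-ban part is easy to model: each account records who invited it, and a ban walks the invite subtree. A toy sketch of the idea (my own illustration, not lobste.rs’s actual implementation):

```python
from collections import defaultdict

invites: defaultdict[str, list[str]] = defaultdict(list)  # inviter -> invitees
banned: set[str] = set()

def register(user: str, inviter: str | None = None) -> None:
    if inviter is not None:
        invites[inviter].append(user)

def tree_ban(user: str) -> None:
    """Ban an account and, recursively, every account it invited."""
    banned.add(user)
    for child in invites[user]:
        tree_ban(child)

# register("bot_a", inviter="careless_human")
# register("bot_b", inviter="bot_a")
# tree_ban("bot_a")  # bans bot_a and bot_b; careless_human keeps their account
```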

0 points

> I convinced another user to invite me over IRC. That’s probably the worst medium for convincing someone that you’re human

Hahah, I’ll say!

1 point

That’s exactly what a bot would say. Bake him away, toys!

2 points

This would tie in nicely to existing library systems. As a plus, if your account ever gets stolen, or if you’re old and don’t understand this whole technology thing, you can talk to a real person. It’s like the concept of a web of trust.

26 points

The trouble with any sort of captcha or test is that it teaches the bots how to pass it. Every time they fail or guess correctly, that’s a data point for their own learning. By developing AI in the first place, we’ve already ruined every hope we have of creating any kind of test to find them.

I used to moderate a fairly large forum with a few thousand sign-ups every day. Every day, the mod team and I would go through the new sign-ups, manually checking usernames and email addresses. The bots were usually really easy to spot. There would be sequences of names, in both the usernames and the email addresses, for example ChristineHarris913, ChristineHarris914, ChristineHarris915, etc. Another good tell was mixed-up ethnicities in the names, e.g. ChristineHuang or ChinLaoHussain. 99% of them were from China, India, or Russia (they mostly don’t seem to use VPNs; I guess they don’t want to pay for them). We would just ban them all en masse.

Each banned account got an automated email saying so. Legitimate people would of course reply to that email to complain, but in the two years I was a mod there, only a tiny handful ever did, and we would simply apologise and let them back in. A few bots slipped through the net, but rarely more than one or two a day; those we banned as soon as they made their first spam post, though we caught most of them before that.

So I think the key is a combination of the No-Captcha, which analyses your activity on the sign-up page, plus an analysis of the chosen username and email address, and an IP check. But don’t use it to block the sign-up; let them in, and then use it to decide whether or not to ban them.
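
The sequential-username tell automates nicely, too: strip trailing digits, group sign-ups by the remaining stem, and flag stems whose numbers run consecutively. A sketch, with an arbitrary threshold:

```python
import re
from collections import defaultdict

def flag_sequential_stems(usernames: list[str], min_run: int = 3) -> set[str]:
    """Flag name stems that appear with consecutive trailing numbers,
    e.g. ChristineHarris913 / 914 / 915."""
    stems: defaultdict[str, list[int]] = defaultdict(list)
    for name in usernames:
        match = re.fullmatch(r"(.+?)(\d+)", name)
        if match:
            stems[match.group(1)].append(int(match.group(2)))
    flagged: set[str] = set()
    for stem, nums in stems.items():
        nums.sort()
        run = 1
        for prev, cur in zip(nums, nums[1:]):
            run = run + 1 if cur == prev + 1 else 1
            if run >= min_run:
                flagged.add(stem)
                break
    return flagged

# flag_sequential_stems(["ChristineHarris913", "ChristineHarris914",
#                        "ChristineHarris915", "alice42"])
# -> {"ChristineHarris"}
```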

