I mean, there might be a secret AI technology so advanced that it can mimic a real human: make posts and comments that look like they were written by a human, and even make intentional speling mistakes to simulate human error. How do we know that such an AI hasn’t already infiltrated the internet, and that everything you see is posted by this AI? If such an AI actually exists, it’s probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr…

[Error: The program “Human_Simulation_AI” is unresponsive]

How do you know everyone IRL isn’t an NPC because this is just a simulation?

6 points

Take the blue pill and find out!

3 points

The last time I took the blue pill I didn’t care if it was a bot or not, I wanted to fuck it.

-3 points

Because such a massive simulation without players serves no purpose that’d justify the waste of the resources needed to run it.

3 points

Maybe it’s someone’s sick fun… or an experiment.

4 points

Last year I saw a movie, or probably an anime, with this theme. People discover they are a simulation, manage to breach through to the race that is just lost in viewing virtual space, wreck shit, and go back.

0 points

If someone is both capable and willing to spend such a massive amount of effort on such an experiment, they already have all the answers the experiment might provide. It’s like thinking NASA would build a massive telescope and place it in orbit just to point it at Earth and record how cats hunt.

Same with fun. Whoever possesses enough resources to waste them on “fun” alone already has access to far more interesting pleasures. It’s like thinking Jeff Bezos is going to buy a private island and build a luxury bunker there for the purpose of torturing cockroaches.


I’ve “played” plenty of simulations that run entirely on their own, with no player input aside from the starting parameters. Chief among them is the one aptly named “The Game of Life.”
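For what it’s worth, Conway’s Game of Life really is a simulation that runs with no input beyond its starting parameters. A minimal sketch in Python (the sparse-set grid representation and the glider start are just illustrative choices):

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life by one generation.
    `live` is the set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each cell on the board has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next generation with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": the entire run is determined by this starting set.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
# After 4 generations the glider reappears shifted one cell diagonally.
```

Nothing intervenes after the initial set is chosen, which is the commenter’s point; whether that kind of thing scales up to a reality like ours is exactly what’s being argued about here.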

0 points

These are highly primitive, limited simulations that follow basic patterns and can’t evolve much.

Our reality is complicated, vast, and chaotic. The two can’t be compared.

-12 points

If it’s a simulation, your imagination and understanding of the world (the simulation) are limited, and you have no idea how resource-intensive it would be to run. Perhaps we’re a kid’s toy for some being.

1 point

Again: the resources and effort needed to create and sustain such a massive simulation just for fun belong to a civilization so advanced that the idea becomes impossible.

Beings capable of it would be able to bypass any stage of infancy or childhood, because those stages would be obsolete and pointless for them.

18 points

At some point it all stops mattering. You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.

I once made an embarrassing attempt to identify a bot, and I learned a fair bit.

There is significant overlap between the smartest bots and the dumbest humans.

A human can:

  • Get angry that they are being tested
  • Fail an AI test
  • Intentionally fail an AI test
  • Pass a test that an AI can also pass, while the tester expects the AI to fail

It’s too unethical to test, so I feel the best course of action is to rely on good/bad-faith tests and the logic of the argument.

Turing tests are thoroughly obsolete. The real question to ask: do you really believe that the average person’s sapience is all that noteworthy?

A well-made LLM can exceed a dumb person pretty easily. It can also be more enjoyable to talk with, or more loving and supportive.

Of course, there are things that current LLMs can’t do well that we could design tests around. Long conversations also have a higher chance of exposing a failure of the AI. Secret AIs and future AIs might be harder, of course.

I believe in the dead internet theory’s spirit. Strap in, meat-people, the ride’s gonna get bumpy.

2 points

You treat bots like humans and humans like bots. It’s all about logic and good/bad faith.

Part of the thing with chatgpt is that it’s particularly good at sounding like it knows what it’s saying while spewing linguistically coherent nonsense.

For many of us (most? even all, to some degree?), the idea of saying what we think to be true, and refraining from saying what we don’t, is ingrained in our culture. That’s heavily diluted on the internet, where the converse tends to be saying whatever we think will make people support or agree with us. We’ve grown up (some of us have!) with some feel for how to tell the difference.

GPT (and I guess most human-like chatbots will be similar for now) is more an amoral, or a-scient, attempt to say something coherent based on the training data. It’s different again, but it sounds uncannily like what we’re used to from good-faith truth-speakers. I also think it’s like the extreme end of some cultures that prioritise saying what will make the other person happy over what is true.

0 points

The real question to ask: do you really believe that the average person’s sapience is all that noteworthy?

Part of the thing with chatgpt is that it’s particularly good at sounding like it knows what it’s saying while spewing linguistically coherent nonsense.

That’s why this is so scary! The average person on the internet is being fake in the same way chatGPT-based bots would be! haha… :(

Your whole comment is great; you understand its passable, seemingly coherent nature. It’s only a hair less coherent than the average person arguing in bad faith, and if optimised on that specific data it would be… scary.

Here is something I mentioned before on a different topic to show you the flaws of people, more so than the capabilities of bots. https://lemmy.ml/comment/1318058

The thing that bothers me most is this thought exercise: if govt agencies and militaries are years ahead, and propaganda is so useful, shouldn’t there be an ultra-high chance that secret AI chatbots are already practically perfected and mass-usable by now?

We have seen such a shift towards a dead internet that these are our final chances. I think we should spend more effort finding tricks to ID bots and doing something about it, or else take to the streets.

2 points

((Why does Firefox crash on me!!!))

((Maybe even Firefox knows I typed too long and rambly.))

So, where does that leave us? There’s always been unreliable knowledge from people. Joe in the next village tells tall tales about Martha from Sweden who catches fish with peeled strawberries. Scientific standardisation has helped a lot, and allowed for a sort of globalised reliable knowledge, but its cracks are showing. We trust ‘the experts’, but then find Wikipedia has trolls and WHO is influenced by Chinese diplomacy. So we trust ‘the community’ and find Amazon reviews are bought. So we trust our moderated sublemmits, and find out the content-to-user matching algorithms breed echo chambers. So we trust the government to moderate, but the American Left admit the Democrats are bad, and the Right admit the Republicans are liars. (And I’ve never even been to America!) So at last we go back to Aunt Jenny, who’s deeply afraid that black people will take over the country, and the local sysadmin whose network security is based on the book he read in the '90s.

Maybe we need to relearn tricks from the old irl days, even if that loses us some of what we could gain from globalised knowledge and friendship. Perhaps we can find new ways to apply these to our internet communities. I don’t think I’m saying anything new here, but I guess fostering a culture of thinking about truth and trust is good: maybe I’m helping that.

Almost as an aside (so I don’t ramble twice as long as my crashed-Firefox answer!): the best philosophical one-liner I’ve found for first-principleing trust is: does this person show love? (Kindness, compassion, selflessness.) To me, and/or to others. If so, that imparts some assumed value to their worldview and life understanding. It doesn’t make them an expert on any topic, but it makes a foundation.

And finally,

Do you really believe that the average person’s sapience is all that noteworthy?

Yes. If you mean, is their comment more notable than most others in a public debate, then no. But if you’re pointing towards, are their experience, understanding, and internal processes valuable, then yes, and that’s important to me. (Though I’m not great enough to hear, consider, or interact with everyone!)

The average person on the internet is being fake in the same way chatGPT-based bots would be!

Do you reckon so? I think the fake internet usually talks differently than chatGPT, though of course propaganda (at the national or individual level) tries to mimic whatever will be most effective. My point was largely that chatGPT mimics the experts we’ve previously learnt to trust better than most of the fake internet could before, whilst being less sapient (than the fake internet) and at the same time being at once more and much less trustworthy.

17 points

Ah, the dead internet “theory”? Ultimately, it doesn’t matter.

Let’s pretend that you’re the last human on the internet, and everyone else (including me) is a bot. This means that at least some bots pass the Turing test with flying colours; they’re indistinguishable from human beings and do the exact same sort of smart and dumb shit that humans do. Is there any real difference between “this is a human being, I’ll treat them as such” and “this is a bot, but it behaves like a human being, so I need to treat it as a human being”?

7 points

The Turing test isn’t any good for telling a human from a bot, since many real people wouldn’t pass it.

4 points

We can simply treat those “real people” as bots, problem solved. :-)

But seriously now: the point is that if it quacks like a duck and walks like a duck, then you treat it like a duck. Or in this case, like a human.

1 point

I was making the observation that the Turing test is too flawed a tool to be reliable. If you want to find out who is who, you need something better, more like a Voight-Kampff test…

3 points

Well, it would definitely matter, at least for practical purposes, like if you wanted to meet up with somebody.

2 points

This is a good answer, because it avoids the dehumanization trap that these theories fall into:

Basically, the belief that some beings don’t have “souls” and don’t have to be treated with conscience.

The “we are in a simulation” conspiracy fans toy with a horrifying idea of NPCs: that some humans only act like humans very convincingly, but are really just thin shells that don’t feel pain or happiness, so whatever you do to them can’t be morally wrong.

It’s similar to how some religions hold that people can have their souls taken by Satan, becoming mere vessels of demonic possession here to corrupt us. They behave very much like humans, but do not be tricked!

Europeans used to think Africans had no souls, that they were just animals very good at imitating human behavior.

These ideas are all extremely powerful tools for any fascist movement that needs a vague excuse to commit atrocities against its opponents and scapegoats.

12 points

I’m not a bot, I… was just here to promote a movie.

6 points
Deleted by creator
2 points

Oh bugger.

11 points

Welcome to solipsism. We’re happy to have you.

2 points

Okay you clearly don’t exist, I just need to delete you from my brain.

/s

