I mean, there might be a secret AI technology so advanced that it can mimic a real human: make posts and comments that look like they’re written by a human, and even intentionally make speling mistakes to simulate human errors. How do we know that such an AI hasn’t already infiltrated the internet and that everything you see is posted by this AI? If such an AI actually exists, it’s probably so advanced that it almost never fails, barring rare situations where there is an unexpected errrrrrrrrrorrrrrrrrrrr…
[Error: The program “Human_Simulation_AI” is unresponsive]
We can simply treat those “real people” as bots, problem solved. :-)
But seriously now: the point is that if it quacks like a duck and walks like a duck, then you treat it like a duck. Or in this case, like a human.
I made the observation that the Turing test is too flawed a tool to be reliable. If you want to find out who is who, you need something better, more like the Voight-Kampff…
Sure. The test itself doesn’t matter that much, contextually speaking; just that you have some way to distinguish between humans and bots, and yet the internet would still be filled with bots that pass as humans.
I guess that the RL equivalent of the Voight-Kampff would be trolling? We have no access to respiration or heart rate across the internet (and even if we had, they could be counterfeited), but humans would react differently to being trolled than bots would. Unless the bots are so advanced that they react to trolling the same way we do, and respond with angry words.
I guess that the RL equivalent of the Voight-Kampff would be trolling? (…)
Interesting. I hadn’t thought about it, but pushing the right buttons and observing the reactions might indeed be a good foundation for some “humanity” test.
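Just for fun, the “push buttons, observe reactions” idea could be sketched as a toy script. Everything here is made up for illustration (the word list, the thresholds, the two signals); a real detection test would obviously need far more than this, and sufficiently advanced bots would fake both signals anyway, as noted above.

```python
import statistics

# Toy sketch of a reaction-based "humanity" test: provoke the subject,
# then look for (1) emotional wording and (2) irregular response timing.
# The word list and scoring are purely illustrative assumptions.

ANGRY_WORDS = {"angry", "idiot", "stop", "wtf"}

def humanity_score(replies, latencies_sec):
    """Higher scores mean more human-like reactions to provocation:
    humans tend to get emotional and answer at uneven intervals,
    while a naive bot stays bland and metronomic."""
    emotional = sum(
        any(word in reply.lower() for word in ANGRY_WORDS)
        for reply in replies
    ) / len(replies)
    # Population standard deviation of latencies, capped at 1.0,
    # as a crude measure of timing irregularity.
    jitter = min(statistics.pstdev(latencies_sec), 1.0)
    return emotional + jitter

# A "human-like" pattern: emotional replies, uneven timing.
human = humanity_score(["wtf stop trolling me", "no, YOU are wrong"], [2.0, 9.5])
# A "bot-like" pattern: bland replies, metronomic timing.
bot = humanity_score(["Thank you for your input.", "Noted."], [1.0, 1.0])
```

Here `human` scores higher than `bot`, which is the whole (very fragile) premise of the test.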
You may be onto something, man. Good job! 👏