When Humans Fail the Turing Test

During Turing tests at Bletchley Park, human judges are sometimes tricked into believing that they’re conversing with a real person, when in fact their dialogue partner is a software chatbot. That’s what Turing tests are all about, right? But what might be even more intriguing is the fact that these same judges often think the humans they’re talking to are actually machines.

An intriguing study was recently published that analyzes these Turing test misidentifications. It’s based on transcripts of five-minute text-based conversations that took place in 2012 at Bletchley Park, England. The authors analyze the conversations and form hypotheses about why the human interrogators thought their equally human interlocutors were chatbots.

Talk Like a Human

In looking through the transcripts of these conversations, it’s very clear that the human participants are trying to trick the judges into thinking they are chatbots. That’s really the only conclusion you can draw from reading the transcripts. Apparently, the humans weren’t given any direct instructions to mislead the judges. What they were told, according to the study’s authors, was to be themselves, but they were “asked not to make it easy for the machines because it was the machines who/which were competing against them for humanness.” I suppose you could interpret this instruction in numerous ways, but most human participants seemed to interpret it by pretending to be machines.

In my opinion, this interpretation is not effective at all. Would it really make it harder for judges to identify the real chatbots if the humans acted like chatbots too? That doesn’t make sense to me. Chatbots are so obviously not human that it seems you’d put the machines at a huge disadvantage by acting as much like a human as you could! Acting human shouldn’t be hard for a real person, and it would presumably make the conversations with actual chatbots easier to spot. How about just exhibiting some evidence that you actually care about the person you’re talking to and about what they’re feeling and thinking? Wouldn’t that palpable shred of humanness stand out sharply against the cold repartee of a chatbot?

Based on the evidence, the Turing tests administered at Bletchley Park don’t seem like a truly fair game, and they also seem pretty screwy. But it’s nonetheless interesting to observe the ploys the human interlocutors use to successfully fool the judges into classifying them as chatbots. (Again, I’m not sure why they’d want to do this, but apparently that was their goal.) At the conclusion of the paper, the authors provide botmasters with concrete Do’s and Don’ts for building their chatbots.

The paper provides a detailed analysis of 13 text conversation transcripts. Certain types of conversational behavior, it turns out, are often perceived by humans as a flag that they’re talking to a machine. If enough of these flags appear in a short exchange, the interrogator is likely to classify their interlocutor as a chatbot (a toy sketch of this flag-counting idea follows the list). Here’s a list of the dialogue behaviors that raise suspicion:

  • Not answering the question, but instead bringing up another, seemingly unrelated topic. This appears to be a diversionary tactic, even if it might be a justified desire to change the topic.
  • Trying to be funny. Attempts at humor are viewed as a way to get around having to answer the question.
  • Appearing to know too many facts. If you sound like an encyclopedia, you’re probably a machine with access to vast databases.
  • Answering with short, bland, incomplete sentences that don’t really say anything. This just doesn’t seem human, especially if it’s your dominant communication mode.
  • Being a smart aleck. Offering responses like “it’s none of your business” or “so what’s it to you, anyway?” comes off sounding like a cheap diversionary tactic.
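
To make that flag-counting idea concrete, here’s a toy sketch in Python of how a judge tallying these behaviors might land on a “machine” verdict. To be clear, this is not the study’s methodology: the keyword lists, the terse-reply cutoff, and the three-flag threshold are all assumptions I made up purely for illustration.

```python
import re

# Made-up keyword patterns standing in for three of the suspicious behaviors.
FLAG_PATTERNS = {
    "topic_dodge":  [r"\banyway\b", r"\blet's talk about\b", r"\bspeaking of\b"],
    "smart_aleck":  [r"none of your business", r"what's it to you"],
    "encyclopedic": [r"\b\d{4}\b", r"\bpopulation of\b"],
}

TERSE_REPLY_MAX_WORDS = 3   # replies this short count as a "bland" flag
SUSPICION_THRESHOLD = 3     # arbitrary cutoff: this many flags -> "machine"

def count_flags(replies):
    """Tally suspicion flags across an interlocutor's replies."""
    flags = 0
    for reply in replies:
        text = reply.lower()
        if len(text.split()) <= TERSE_REPLY_MAX_WORDS:
            flags += 1  # short, bland reply
        for patterns in FLAG_PATTERNS.values():
            if any(re.search(p, text) for p in patterns):
                flags += 1
                break  # at most one keyword flag per reply
    return flags

def judge(replies):
    """Return the verdict a flag-tallying judge might reach."""
    return "machine" if count_flags(replies) >= SUSPICION_THRESHOLD else "human"

if __name__ == "__main__":
    transcript = [
        "Nice.",
        "Anyway, let's talk about the weather.",
        "The population of Ukraine was about 45 million in 2012.",
        "What's it to you, anyway?",
    ]
    print(judge(transcript))  # -> "machine"
```

The point of the sketch is only that each of the behaviors above is cheap to spot, so a handful of them in a five-minute exchange quickly tips the verdict.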

What does this mean for botmasters? I suppose it means that the typical “easy” ways to keep a chatbot conversation going are not very likely to succeed. Given the recent success of the chatbot Eugene Goostman, though, I’m not so sure this is completely true. The Goostman chatbot uses all the ploys described above, as you can see in a conversation documented by Scott Aaronson. So perhaps one of the tricks to convincing humans that your chatbot is a real person is to provide it with a strong characterization and a biographical backstory that might explain away its annoying quirkiness. If you’re a human, though, it seems your best strategy is to ditch the quirkiness. Just be a person. If you’re human, that should come naturally.

 
