The blogosphere has been full of the news that a Russian chatbot convinced 33% of judges at the recent Royal Society-sponsored Turing test that it was human. There's apparently nothing unusual or advanced about the technology behind Eugene Goostman, the chatbot that's scripted to have the personality of a 13-year-old Ukrainian with poor English skills. It's a pattern-matching chatbot with a convenient backstory (teenager, flippant, weak English) that makes its confusing and off-topic answers seem more forgivable.
Since all the media hype about “the computer that passed the Turing test,” there’s been a bit of a backlash from more astute observers of the AI world.
Doug Aamoth published a short piece on Time online that recounts his brief conversation with the Goostman chatbot. The conversation has all the hallmarks of a shaky exchange with a pattern-matching chatbot; there's very little in it to make Doug believe he's talking with a human. As I wrote in my post The Problem With Today's Chatbots, pattern-matching dialog programs are just very, very limited in their ability to mimic real human conversation. To be convincing, a program has to be able to react to completely unanticipated questions and comments. Simply diverting the conversation to another topic, or falling back on some generic, hollow comment, isn't how conversation works.
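To make the limitation concrete, here is a minimal sketch of how an ELIZA-style pattern-matching chatbot works (this is purely illustrative, assuming a simple regex-to-canned-reply design, and is not Goostman's actual code — the rule phrasings and persona details are invented for the example):

```python
import re
import random

# Each rule maps a regex pattern to one or more canned replies.
# The deliberately imperfect English mimics the "13-year-old with
# weak English" persona that excuses odd answers.
RULES = [
    (re.compile(r"\bhow old are you\b", re.I),
     ["I am 13 years old."]),
    (re.compile(r"\bwhere (are you from|do you live)\b", re.I),
     ["I live in Odessa, it is city in Ukraine."]),
    (re.compile(r"\byour name\b", re.I),
     ["My name is Eugene. And yours?"]),
]

# Generic deflections used when no pattern matches -- this fallback is
# exactly the weakness that straightforward factual questions expose.
DEFLECTIONS = [
    "That is interesting question! What do you think about it?",
    "I'd rather talk about something else. Do you like music?",
]

def respond(user_input: str) -> str:
    """Return the first matching canned reply, else a generic deflection."""
    for pattern, replies in RULES:
        if pattern.search(user_input):
            return random.choice(replies)
    return random.choice(DEFLECTIONS)
```

Ask it an anticipated question ("How old are you?") and it looks fluent; ask anything outside its rule set ("Which is bigger, a shoebox or Mount Everest?") and it can only change the subject — which is why simple factual probes unmask this class of program so quickly.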
An even more scathing takedown of the Goostman chatbot was posted by Scott Aaronson, a theoretical computer scientist. Aaronson uses the simple technique of asking straightforward factual questions to quickly unmask Goostman as a chatbot. His post also captures a compelling debate about the inherent flaw of chatbots, as opposed to other, more promising virtual agent and cognitive computing technologies.
If Eugene Goostman’s performance tells us anything, it’s probably that humans can be fooled by chatbots, especially when they want to be. But we already knew that. There are all kinds of bots out in the world and they fool people all the time. Maybe what we need is a training course on “How to Know You’re Talking to a Chatbot.” But that might just spoil people’s fun.
Dean Burnett wrote the cleverest response to the recent hype about the 13-year-old chatbot wonder. Check it out for a good laugh. And for a literary take on chatbots and the Turing test, I highly recommend Scott Hutchins' novel A Working Theory of Love.