Do Service Avatars Have a Bad Reputation?

There are different names out there for intelligent conversational software programs that offer customer support. I typically use the term virtual agent, but the phrase “service avatar” seems to be in the vernacular as well. All Things D published an article today by Jeff Cavins, CEO of FuzeBox, in which he uses service avatar as something of a derogatory term. The thrust of the article (which certainly has a clever title!) is that cloud-based services companies, in particular Software as a Service companies, have lost the fine art of true customer service.

Cavins observes that many cloud and tech companies are so focused on growing their business that catering to the customer isn’t a priority. Being tech savvy, these companies feel it’s okay to let customers fend for themselves by running them through what Cavins refers to as a “low-touch, self-service experience.” Automation has become so common in everything these tech companies do that they’ve naturally sought ways to automate the customer support experience too. Cavins claims that customers feel abandoned and that they don’t appreciate being told to “deal with my service avatar.”

Those of us who believe in virtual agent technology might have our feelings hurt by this criticism. But I’ve long been told to take criticism as a gift. It would be great to start collecting objective evidence from consumer interactions with virtual agents to understand what is and isn’t working for customers. Will a consumer always prefer interacting with a real person over chatting with a service avatar? Are there settings in which a person might actually prefer talking to a virtual agent? If a consumer can get to a virtual agent in seconds to have a simple question answered, wouldn’t that be preferable to waiting many minutes to speak to a human service rep? Is it possible that a person might feel more comfortable conversing with a service avatar about money matters or health issues? Perhaps there’s a threshold of technical capability that has to be reached before a person will really prefer a virtual agent over a live human. For example, if the service avatar / virtual agent can process more information more quickly and retrieve the correct answer more reliably than a human, wouldn’t the consumer prefer dealing with the agent?

These are all areas ripe for study. In the meantime, we should take the observations of Jeff Cavins to heart. Let’s not assume that, just because it’s easier and cheaper for us, the consumer will always be fine chatting with a bot.

Is Your Mobile Personal Assistant Spying on You?

Apple’s Siri, the iOS mobile personal assistant, presumably engages in thousands of conversations a day. Recent articles indicate that Apple stores the Siri voice data files from these conversations for up to two years. The data is stored anonymously, without linking a conversation to a specific user, and at some point the data files are purged completely. Why does Apple archive Siri conversations? Presumably to learn how people interact with the mobile personal assistant and improve its performance.

What are the implications of Apple’s policy of holding on to our conversations with Siri? First of all, for most of us, it just feels creepy to think that our conversations with a virtual software agent are being tracked and stored. Should it bother us? We all know that the text messages we exchange with friends, family, and whomever else are kept on servers in the cloud. We know these messages could be used against us as evidence if we were ever charged with a crime or civil offense, yet we mostly choose to ignore this potential intrusion on our privacy. What about privacy concerns when it comes to how we engage with mobile and web-based virtual agents? Do we need a whole new ethics model to help us deal with privacy in the age of non-human conversational partners?

I recently wrote about IBM’s DeepQA (Watson) technology and how it is being trained to analyze medical documentation and assist physicians in diagnosing patient illnesses. Intelligent cognitive computing systems that can quickly process and interpret vast amounts of structured and unstructured medical data can potentially save lives and reduce medical costs. But will patients have to sacrifice privacy to benefit from artificially intelligent systems? A virtual medical agent will undoubtedly store patient data so that it can be referred to for improving future diagnoses. If that data is not linked to a patient’s identity, the patient has little to be concerned about. What happens, though, when the conversation becomes more like a human-to-human interaction? Advances in technology may bring us a fully conversational AI as a doctor, therapist, or other trusted medical service provider. Can we be sure that the virtual agent we confide in will keep our discussions confidential? A virtual agent is not an autonomous being. It’s a piece of software technology that can be controlled by humans and corporations that may see value in the information they collect about us.

Surely the time is fast approaching when virtual conversational agents will be able to talk to us about anything, including providing companionship when we’re lonely or need an attentive ear. Do we want our conversations with surrogate virtual companions to be snatched up by advertisers so that they can target us for products? What if you spoke to a virtual therapist bot about being lonely and were bombarded a few minutes later by ads for dating sites and singles cruises?

Let’s not allow privacy concerns to curb our appetite for virtual agent technologies or stop us from pursuing research and new products in this area. We do, however, need to start thinking through the implications of the data generated by these human-to-machine interactions. How will it be handled and stored, and can we adequately protect the privacy of the people who will come to rely ever more heavily on mobile personal assistants and other virtual agents?