Is Your Mobile Personal Assistant Spying on You?

Apple’s Siri, the iOS mobile personal assistant, presumably engages in thousands of conversations a day. Recent articles indicate that Apple stores the Siri voice data files from these conversations for up to two years. The data is stored anonymously, with no link between a conversation and a specific user, and at some point the files are purged completely. Why does Apple archive Siri conversations at all? Presumably to learn how people interact with the assistant and to improve its performance.
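As a rough illustration only (a hypothetical Python sketch, not Apple’s actual pipeline), an anonymized, time-limited retention scheme like the one described might look like this: the stored record keeps the query text for analysis, replaces the user identifier with a one-way hash, and anything older than the retention window gets purged.

```python
import hashlib
import time

RETENTION_SECONDS = 2 * 365 * 24 * 60 * 60  # roughly two years

def anonymize(user_id: str, query_text: str) -> dict:
    """Keep the query for analysis, but replace the raw user ID
    with a one-way hash so the record isn't directly traceable
    back to a named account."""
    return {
        "user_token": hashlib.sha256(user_id.encode()).hexdigest(),
        "query": query_text,
        "stored_at": time.time(),
    }

def purge_expired(records: list[dict]) -> list[dict]:
    """Drop any record older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    return [r for r in records if r["stored_at"] >= cutoff]

# Example: the stored record contains no raw user identifier.
records = [anonymize("user-1234", "what's the weather tomorrow?")]
records = purge_expired(records)
```

It’s worth noting that a hashed identifier like this is pseudonymous rather than truly anonymous: it still allows all of one user’s queries to be linked together, which is one reason claims of “anonymized” storage deserve scrutiny.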

What are the implications of Apple’s policy of holding on to our conversations with Siri? First of all, for most of us, it simply feels creepy that our conversations with a virtual software agent are being tracked and stored. Should it bother us? We all know that the text messages we exchange with friends, family, and others are kept on servers in the cloud. We know those messages could be used against us as evidence if we were ever charged with a crime or civil offense, yet we mostly choose to ignore this potential intrusion on our privacy. What about privacy when it comes to how we engage with mobile and web-based virtual agents? Do we need a whole new ethics model to deal with privacy in the age of non-human conversational partners?

I recently wrote about IBM’s DeepQA (Watson) technology and how it is being trained to analyze medical documentation and assist physicians in diagnosing patient illnesses. Intelligent cognitive computing systems that can quickly process and interpret vast amounts of structured and unstructured medical data can potentially save lives and reduce medical costs. But will patients have to sacrifice privacy to benefit from artificially intelligent systems? A virtual medical agent will undoubtedly store patient data so it can be consulted to improve future diagnoses. If that data is not linked to a patient’s identity, the patient has little to worry about. What happens, though, when the conversation becomes more like a human-to-human interaction? Advances in technology may bring us a fully conversational AI as a doctor, therapist, or other trusted medical service provider. Can we be sure that the virtual agent we confide in will keep our discussions confidential? A virtual agent is not an autonomous being. It’s a piece of software controlled by people and corporations that may see value in the information it collects about us.

Surely the time is fast approaching when virtual conversational agents will be able to talk to us about anything, including providing companionship when we’re lonely or need an attentive ear. Do we want our conversations with surrogate virtual companions to be snatched up by advertisers so that they can target us for products? What if you spoke to a virtual therapist bot about being lonely and were bombarded a few minutes later by ads for dating sites and singles cruises?

Let’s not allow privacy concerns to curb our appetite for virtual agent technologies or stop us from pursuing research and new products in this area. We do, however, need to start thinking through the implications of the data generated by these human-to-machine interactions. How will it be handled and stored, and can we adequately protect the privacy of the people who will come to rely ever more heavily on mobile personal assistants and other virtual agents?
