Now On Tap, Apple Proactive, Privacy and Trust

Aarti Shahani of NPR wrote an interesting piece last week on the different approaches Google and Apple are taking to how their personal intelligent assistants will access and use our data.

Google seems to be reaffirming that personal assistants require deep linking into our data, our apps, and our location in order to be truly predictive and useful. Now On Tap, the extension of Google Now capabilities announced for the fall, will leverage data from the full spectrum of our app ecosystem to be an assistant that knows where we are, what we’re doing, and how it can help us.

Apple’s announcement at WWDC 2015 of a proactive assistant, on the other hand, emphasized their desire to keep interactions anonymous. Apple’s assistant won’t read our emails or store data about us in the cloud. It will, however, have to know something about us to be effective. It will be able to access boarding passes and other information that we store in Passbook, for example, and it will know what apps we tend to use the most.

As personal intelligent assistants become increasingly capable, the discussions about privacy aren’t likely to go away. Will the majority of people set aside privacy concerns in exchange for the benefits of an incredibly useful personal assistant in their pockets or on their wrists? Or will most people see the sacrifice as too great?

There’s no clear answer yet. Apple obviously believes that concerns over privacy are growing and that addressing these concerns can give them the upper hand in a competitive battle. Google believes in the power of the Knowledge Graph. They seem to be betting that we’ll want the predictive, intelligent services that only access to a complete Knowledge Graph can provide.

At some point, the discussion may focus on trust. Intelligent assistants need to know about us. But do we trust the corporations behind these assistants with our data? We’re not naive enough to believe that their use of our data is completely altruistic. Yes, they give us a free smart assistant. But we suspect that they also use our data to make a profit, most often in ways that aren’t even visible to us. How much of this profit-making are we willing to accept? Will we accept all of it, as long as it doesn’t seem to do us harm?

What if corporations could be more transparent about the way they access our data and the ways they profit from it? Would that help at all to build trust and allay concerns? Perhaps at some point we can establish industry guidelines for transparency when it comes to sharing how user data is:

  • Acquired
  • Utilized by intelligent assistants to provide predictive assistance
  • Monetized by the acquiring organization or shared with other organizations
  • Safeguarded against unwanted access

Would such guidelines help us be more trusting of providers of intelligent personal assistants? That remains to be seen. And what about the use of personal data by customer-facing (self-service) intelligent virtual assistants? I’ll explore that topic in the next blog post.
