The Risk of Virtual Personal Assistants as Gatekeepers

Virtual personal assistants (VPAs) are software applications that understand written and spoken language, speak, answer questions, provide useful information, and perform tasks for us. VPAs rely on capabilities ranging from speech recognition to predictive analytics and machine learning. Siri and Google Now are the most widely used VPAs, and as the underlying technologies advance, our VPAs are expected to become increasingly capable.

In a recent article, Tom Pullar-Strecker speculates that VPAs will have deep knowledge of us and our preferences. We will depend on these smart assistants to help us plan and organize our lives and even carry out basic tasks. Our VPA will determine whether we’re available to meet with friends and then arrange the entire evening for us, from inviting our guests to making the restaurant reservations. The VPA will also act as a gatekeeper, blocking unwanted corporate advertisements from reaching us. It will filter ads and only show us products that it believes we’ll be interested in, based on its knowledge of us.

We’re entering a new world. A VPA with all these abilities can be a huge asset to us and to those around us. But are there dangers lurking behind this seemingly positive future scenario? Most of the concerns voiced so far center on privacy. For the VPA to be truly effective, it will require deep insights into my personality, habits, and health. It will need to know who my family members are, as well as my friends and co-workers. Many people are worried about the implications of handing over so much data to a VPA.

But there are other risks that aren’t discussed as often as privacy. One that hasn’t received much attention is what I’ll call unfair VPA filtering. If my VPA protects me from unwanted ads and solicitations, and if it has my permission to make purchases on my behalf, it will wield a lot of power. Companies are going to want the VPA to approve their products rather than filter them out. How will the VPA decide which pair of shoes to buy for me when it judges that I’d be happy with any of five different selections? The VPA, or whoever controls it, could simply buy the shoes from the company that pays the most to have me as a customer, collecting a kickback on every transaction it executes on my behalf.
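To make the conflict of interest concrete, here is a minimal sketch of the shoe-buying scenario. Everything in it is invented for illustration: the vendor names, prices, fit scores, and referral fees are hypothetical, as are the two selection policies. The point is that when several products are roughly equally good for the user, a hidden kickback term can silently decide the winner.

```python
# Hypothetical sketch of kickback-biased VPA filtering.
# All vendors, prices, fit scores, and fees are invented for illustration.

# Five shoe offers the VPA judges roughly equally satisfying to the user
# (fit_score), but whose vendors pay different referral fees (kickback,
# as a fraction of the sale price).
offers = [
    {"vendor": "A", "price": 80, "fit_score": 0.91, "kickback": 0.00},
    {"vendor": "B", "price": 85, "fit_score": 0.90, "kickback": 0.12},
    {"vendor": "C", "price": 78, "fit_score": 0.89, "kickback": 0.05},
    {"vendor": "D", "price": 90, "fit_score": 0.92, "kickback": 0.02},
    {"vendor": "E", "price": 82, "fit_score": 0.90, "kickback": 0.20},
]

def user_aligned_choice(offers):
    # Pick the best fit for the user; break ties on lower price.
    return max(offers, key=lambda o: (o["fit_score"], -o["price"]))

def kickback_biased_choice(offers):
    # Among offers the user would plausibly accept, pick the one that
    # pays the VPA operator the largest referral fee.
    acceptable = [o for o in offers if o["fit_score"] >= 0.89]
    return max(acceptable, key=lambda o: o["kickback"] * o["price"])

print(user_aligned_choice(offers)["vendor"])     # D: best fit for the user
print(kickback_biased_choice(offers)["vendor"])  # E: best fee for the operator
```

Both policies return a shoe the user would likely be happy with, which is exactly what makes the bias hard to detect from the outside.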

I explore this risk in a recent guest post on Opus Research titled “Virtual Personal Assistants: Future Gatekeeper to Your Attention?” You can also join the conversation in Opus Research’s LinkedIn group for Intelligent Assistants Developers and Implementers.

Share your thoughts on this topic
