Customer-Facing Virtual Agents – Dystopias for Discussion

In my last blog post, I briefly touched on the topic of personal data and how companies like Google and Apple are thinking about acquiring and using it to increase the effectiveness of their intelligent personal assistants. I brought up the subject of trust and transparency with regard to how corporations handle our personal data and mentioned that a set of industry guidelines might be useful.

In this post I want to explore possible future ethical issues surrounding customer-facing virtual agents. This topic is admittedly speculative, but sometimes speculation can be entertaining.

Customer-facing virtual agents, also known as intelligent virtual assistants (IVAs) or web self-service agents, are gaining traction in the marketplace. For large organizations that field high volumes of customer inquiries, the return on investment for implementing IVA technologies is compelling. The typical use case for an IVA is to provide 24/7 customer self-service via web or mobile platforms. The usefulness of an IVA is often measured by a metric known as call deflection: the percentage of incoming inquiries the IVA can answer without escalating them to a live agent.
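For readers who like concrete numbers, call deflection is just a simple ratio. Here's a minimal sketch in Python; the function name and the figures are invented for illustration:

```python
def call_deflection_rate(total_inquiries: int, escalated_to_live_agent: int) -> float:
    """Percentage of inquiries the IVA resolved without escalation."""
    if total_inquiries == 0:
        return 0.0
    deflected = total_inquiries - escalated_to_live_agent
    return 100.0 * deflected / total_inquiries

# Example: 10,000 inquiries, 3,500 escalated -> 65% call deflection
print(call_deflection_rate(10_000, 3_500))  # 65.0
```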

As artificial intelligence technology matures, IVAs can perform more advanced services. They can gather information about the customer and the context of the customer’s situation to provide predictive help. For example, if you’ve overdrawn your checking account and you visit the bank’s mobile support app, a smart IVA might say something like “I see you’ve overdrawn your checking account. May I assist you in setting up overdraft protection?”
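To make the idea concrete, here is a minimal, hypothetical sketch of such a rule-based proactive trigger. The Account fields and the message text are my own assumptions for illustration, not any vendor’s actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    holder: str
    balance: float
    has_overdraft_protection: bool

def proactive_greeting(account: Account) -> Optional[str]:
    """Return a context-aware opener, or None to fall back to a generic greeting."""
    if account.balance < 0 and not account.has_overdraft_protection:
        return ("I see you've overdrawn your checking account. "
                "May I assist you in setting up overdraft protection?")
    return None

print(proactive_greeting(Account("Mary", -42.17, False)))
```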

IVAs can also leverage recommendation algorithms to present you with offers that they predict might resonate with you and inspire you to make a purchase.
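As a toy illustration of the underlying idea, the sketch below scores candidate products by how often they co-occur with a customer’s past purchases. The customers, items, and scoring rule are all invented; real recommendation engines are far more sophisticated:

```python
from collections import Counter

purchase_history = {
    "mary": {"dog bed", "leash", "candles"},
    "raj":  {"leash", "dog treats", "candles"},
    "chen": {"dog treats", "dog bed", "shampoo"},
}

def recommend(customer: str, k: int = 2) -> list:
    """Suggest up to k items the customer lacks, weighted by purchase overlap."""
    owned = purchase_history[customer]
    scores = Counter()
    for other, items in purchase_history.items():
        if other == customer:
            continue
        overlap = len(owned & items)      # shared purchases as a crude similarity
        for item in items - owned:        # candidate items the customer lacks
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("mary"))  # ['dog treats', 'shampoo']
```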

Now comes the interesting question. What are the ethics around IVAs, especially as the technology advances? Let’s imagine a dystopian scenario where the technology has both the power and the intention to manipulate an unsuspecting customer.

Let’s imagine that the IVA has compiled data showing that Mary tends to shop impulsively on Thursday nights. (The IVA probably doesn’t know that Mary has a monthly board meeting on Thursday that really stresses her out and that shopping is how she lets off steam, but its data analysis has alerted it to the trend.) The IVA proactively engages Mary on Thursday night and starts suggesting products it predicts she’ll like based on past purchases. It also remembers that Mary likes to talk about her dog, so while it tempts her to load up her shopping cart with things she doesn’t need, it distracts her with chatty questions about Lucky. Within a couple of months, Mary is addicted to her Thursday-night shopping binges, and her credit card debt has exploded.
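For illustration only, here is a hypothetical sketch of the kind of trend analysis the scenario imagines: grouping purchases by weekday and flagging days where average spending spikes. The transactions and the threshold are invented:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

purchases = [  # (timestamp, amount) -- invented data
    (datetime(2017, 3, 2, 21, 15), 180.0),   # Thursday
    (datetime(2017, 3, 6, 12, 30), 25.0),    # Monday
    (datetime(2017, 3, 9, 22, 5), 210.0),    # Thursday
    (datetime(2017, 3, 14, 9, 45), 15.0),    # Tuesday
    (datetime(2017, 3, 16, 21, 40), 195.0),  # Thursday
]

def spending_spike_days(purchases, threshold=1.5):
    """Weekdays whose average spend exceeds `threshold` x the overall average."""
    by_day = defaultdict(list)
    for when, amount in purchases:
        by_day[when.strftime("%A")].append(amount)
    overall = mean(amount for _, amount in purchases)
    return [day for day, amounts in by_day.items()
            if mean(amounts) > threshold * overall]

print(spending_spike_days(purchases))  # ['Thursday']
```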

This example is far-fetched and purely illustrative. But who will decide what guardrails IVAs must operate within? Should we trust the companies that deploy customer-facing IVAs to follow ethical guidelines that preclude them from using IVAs in a manipulative, but potentially lucrative, manner? Do shoppers need their own loyal personal intelligent assistants to protect them from the potential manipulations of retail-oriented IVAs? Who knows what the future will really bring, but these scenarios could make a fertile backdrop for good science fiction.
