Should the EFF Be Concerned About The Government’s Use of Virtual Agents?

The Electronic Frontier Foundation (EFF) has been interested in learning more about how U.S. government agencies use virtual agents. Most recently, the EFF has looked into how the military uses virtual agents in its recruiting and public-facing information delivery efforts. Dave Maass of the EFF posted an update on the EFF-led investigations into the Army’s Sgt. STAR chatbot, which has been available on the GoArmy.com website since August 2006. The EFF received information from the Department of the Army after issuing a Freedom of Information Act (FOIA) request.

The FOIA request asked for all of Sgt. STAR’s possible answers to potential questions, as well as usage data and other information. Maass links to all of the documents, including the full database of output scripts, in the above-mentioned article.

On behalf of the EFF, Maass expresses several concerns about Sgt. STAR’s use and operations. Those concerns can be categorized as follows:

  • Multithreading:  The Sgt. STAR virtual agent can engage in many simultaneous conversations and thus converse with far more people than a human could.
  • Privacy concerns: All of Sgt. STAR’s conversations are logged and humans have access to everything that was communicated to the virtual agent, even if people talking to Sgt. STAR are under the impression that their conversations are confidential.
  • Risk of manipulation of those conversing with the virtual agent: Research shows that people have lowered inhibitions when communicating with a virtual agent, as opposed to when speaking with a human, and they are therefore more likely to divulge personal, confidential, or incriminating information to the chatbot.
  • Potential threat to one’s civil liberties: The same technology that runs Sgt. STAR has been used in virtual agents that the FBI and CIA have used to detect suspected pedophiles and terrorists.

It’s worth dissecting the EFF’s concerns. The first three concerns listed above are broadly applicable to all virtual agent and virtual assistant technologies deployed today in both the public and private sectors. The last issue is most likely isolated to a virtual agent being operated by the government.

On the multithreading point, it’s no surprise that virtual agents are capable of having simultaneous conversations with many people. A virtual agent is a software-based system and, as long as the supporting infrastructure is adequately scalable, the agent can be instantiated over and over again every time a new user shows up to talk. This ability to multithread is a big selling point for businesses that want to scale to meet customer demand without staffing their call centers with hundreds of human agents who may or may not be needed at any given moment.
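To make that concrete, here is a minimal sketch (in Python, with invented names like ChatSession and answer_for, and made-up scripted answers) of why adding another visitor costs a software agent almost nothing: each new conversation is just another lightweight object and task, not another staffed seat.

```python
import asyncio

# Hypothetical scripted responses; not Sgt. STAR's actual answer database.
SCRIPTED_ANSWERS = {
    "how do i enlist?": "You can start an application online or talk to a recruiter.",
    "what jobs are available?": "There are many career fields to choose from.",
}

def answer_for(question: str) -> str:
    """Look up a scripted response, falling back to a default."""
    return SCRIPTED_ANSWERS.get(
        question.lower().strip(),
        "I don't have an answer for that yet.",
    )

class ChatSession:
    """One visitor's conversation; creating another costs almost nothing."""
    def __init__(self, user_id: str):
        self.user_id = user_id

    async def handle(self, question: str) -> str:
        await asyncio.sleep(0)  # stand-in for real I/O (network, database)
        return answer_for(question)

async def main():
    # A thousand visitors arrive at once; each gets a fresh session object.
    sessions = [ChatSession(f"user-{i}") for i in range(1000)]
    replies = await asyncio.gather(*(s.handle("How do I enlist?") for s in sessions))
    print(len(replies), "visitors answered concurrently")

asyncio.run(main())
```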

When it comes to privacy, the truth is that all communication with chatbots can be, and usually is, tracked and analyzed afterwards. The ability to capture what humans are asking and how they respond to their virtual agent dialogue partners is, in fact, a key benefit of virtual agent technology for the businesses and organizations that use it. Businesses gain intelligence through understanding the voice of the customer, and some virtual agent companies differentiate themselves by offering intelligence products that analyze conversational data and highlight actionable trends. Whether or not this is considered ‘spying’ on people is probably a matter of perspective. That no chatbot conversations are private is an undeniable fact, and one that the industry still hasn’t fully addressed.
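As a rough illustration of that logging-and-analysis loop, and not any vendor’s actual pipeline, the mechanics can be as simple as appending every utterance to a store and counting what comes up most often. The field names and questions below are invented.

```python
from collections import Counter
from datetime import datetime, timezone

# Every utterance gets recorded; nothing the visitor types is ephemeral.
chat_log = []

def log_utterance(session_id: str, text: str) -> None:
    """Record what a visitor typed, with a timestamp."""
    chat_log.append({
        "session": session_id,
        "text": text,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def top_questions(n: int = 3):
    """Surface the most frequently asked questions (the 'voice of the customer')."""
    return Counter(entry["text"].lower() for entry in chat_log).most_common(n)

log_utterance("a1", "What is basic training like?")
log_utterance("b2", "What is basic training like?")
log_utterance("c3", "Can I choose my duty station?")
print(top_questions())
# [('what is basic training like?', 2), ('can i choose my duty station?', 1)]
```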

Conversations with mobile personal assistants are just as easily tracked and monitored as those with web-based customer service avatars. The revelation last year that Apple stores user conversations with Siri, including a person’s recorded voice, was disconcerting to many. A mitigating factor is that this electronic eavesdropping does not store personally identifiable information, or at least it shouldn’t. Whether virtual assistant and virtual agent operators adhere to this policy is currently a matter of self-policing. It would be preferable for the industry to establish clear guidelines, pledge to follow them, and provide periodic proof that it is adhering to its privacy policies.
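Here is one hedged sketch of what that self-policing could look like in practice: scrubbing obvious personally identifiable information from a transcript before it is stored. The patterns are illustrative only and would miss plenty of real-world PII; I’m not suggesting any particular vendor works this way.

```python
import re

# Illustrative patterns for a few common PII formats; a real system would
# need far more coverage (names, addresses, account numbers, and so on).
PII_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def scrub(text: str) -> str:
    """Replace recognizable PII with placeholders before the transcript is stored."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Call me at 555-867-5309 or email jane.doe@example.com"))
# -> "Call me at [PHONE] or email [EMAIL]"
```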

The manipulation concern is one that I find both intriguing and problematic. Studies do seem to indicate that people are more open with virtual agents than they are with other humans. One data point here comes from research by Janneke van der Zwaan that I wrote about a few months ago. Van der Zwaan found that children felt more comfortable telling a cartoon-like chatbot about their experiences and feelings as victims of bullying than they did speaking with their parents or other adults about the bullying.

If there’s a proven proclivity for people to open up to virtual agents and trust them, there’s a concern that such agents could be used to manipulate people and coax them into divulging confidential information that could be used maliciously.

What Maass seems to find most disturbing is that some of the documents released as part of the FOIA request suggest the government deploys virtual agents to bait and ensnare criminals. Maass asks what happens when a virtual agent misinterprets something someone says and labels them as some sort of offender. How many false positives occur as a result of the large volume of conversations that the government’s covert chatbots engage in? What happens to a person once the chatbot flags them as an offender? These are valid concerns that warrant further investigation and the EFF seems like the right organization to pursue them.
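To see how easily false positives can creep in, consider a deliberately naive keyword screen. This is not how Sgt. STAR or any government system is known to work; it only shows how crude matching can mislabel innocent speech.

```python
# Purely illustrative watch list; invented for this sketch.
WATCH_TERMS = {"attack", "explosive", "target"}

def flag(utterance: str) -> bool:
    """Flag an utterance if it contains any watch-list word."""
    words = set(
        utterance.lower().replace("?", "").replace(".", "").replace(",", "").split()
    )
    return bool(words & WATCH_TERMS)

# An innocuous recruiting question trips the filter: a false positive.
print(flag("When do recruits learn to handle explosive ordnance?"))  # True
print(flag("Is hand-to-hand combat part of basic training?"))        # False
```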

It seems fair to apply a higher level of scrutiny to the government, seeing as how the government has the power to take away a person’s freedom. I get that. At the same time, though, the government needs to be able to leverage the same technologies that are available to private industry. If they are precluded from doing so, you end up with technology that is inadequate and that does not serve the public well. Sgt. STAR has a very strong use case: the Army recruiting site was swamped with inquiries following September 11, 2001 and they needed a practical and cost-effective way to provide the public with the answers they were seeking. Just as any responsible organization would do, they looked for the best state-of-the-art technology to meet the need. As the challenges of protecting freedom become more complex in our modern world, it probably makes sense to encourage the government to employ the same artificial intelligence capabilities available to private industry, while holding them accountable for how they use that technology.