Can Virtual Agent Chatbots Represent Brands?

The WSJ recently ran an article describing how the chat app Kik now lets users converse with chatbots that represent brands. Christopher Mims, the article’s author, terms this interaction with chatbots “chatvertising.” Kik is apparently the most popular chat app among U.S. teens, and its users seem eager to interact with the bots.

This brings up the question: what are effective use cases for the current generation of chatbots (or virtual agents / intelligent virtual assistants)? Even if Kik users have only just discovered enterprise virtual agents that provide information about brands, forward-thinking companies have been employing this technology for years. Web self-service is a growing trend across web, mobile, and social channels, and both organizations and the customers they serve can reap big benefits from text- or voice-based virtual assistants.

Today’s virtual assistants, however, are limited in their ability to converse convincingly. They are good at answering specific, common questions that customers may have about products and services. Future generations of virtual agents are likely to be more capable and more skilled at carrying out humanlike conversations. But can a virtual agent chatbot effectively represent a brand? By “represent a brand,” I mean act as the face and “spokesperson” for a brand and engage customers in a way that makes them want to keep interacting with it.

Ted Livingston, the founder of Kik, notes in the WSJ article that they’ve purposely dumbed the chatbots down for the time being. Mims reports that Kik itself has a chatbot that can tell jokes and apparently engage in very simple dialogue. For other brands, such as Moviefone, Kik users can address the brand’s chatbot, but the bot just pushes out predefined content in response. Livingston says that interactions are purposely very limited to ensure that the chatbots don’t say something that might portray the brand in a negative light. It wouldn’t be advisable, for instance, to enable auto-learning for these chatbots, since they might pick up inappropriate language or jokes and add them to their database of possible responses.
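To make that design concrete, here’s a minimal sketch of the kind of locked-down brand bot the article describes: every reply comes from a table of brand-approved content, and nothing the user says is ever stored or learned. This is my own illustration, not Kik’s or Moviefone’s actual implementation, and the keywords and replies are hypothetical:

```python
# A minimal sketch of a locked-down brand bot (hypothetical keywords/replies).
CANNED_REPLIES = {
    "showtimes": "Here are today's showtimes near you: ...",
    "trailer": "Check out this week's new trailers: ...",
}
DEFAULT_REPLY = "Sorry, I can only talk about movies right now."

def reply(user_message: str) -> str:
    """Match a keyword against pre-approved content; never learn from input."""
    text = user_message.lower()
    for keyword, canned in CANNED_REPLIES.items():
        if keyword in text:
            return canned
    return DEFAULT_REPLY

print(reply("Any good trailers this week?"))  # -> the canned trailer reply
```

The absence of a learning loop is the point: the bot can never be taught an off-brand response, which is exactly the trade-off Livingston describes, safety at the cost of conversational depth.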

A one-dimensional chatbot with no real conversational repertoire or personality seems like it wouldn’t make a very good representative for a brand. But perhaps the perceived novelty of chatbots in itself is enough to engage younger users. The article cites a number of instances where the availability of a brand chatbot significantly increased user engagement with the brand. There’s no data to provide insight on how deep the engagement went, but my assumption is that it was probably fairly superficial.

Virtual assistant technologies, machine learning, and dialogue generation systems are all advancing rapidly. Having a chatbot truly represent a brand may be a bit far-fetched in our current environment, but it might seem like an obvious strategy in the not-so-distant future.

When Humans Fail the Turing Test

During Turing tests at Bletchley Park, human judges are sometimes tricked into believing that they’re conversing with a real person, when in fact their dialogue partner is a software chatbot. That’s what Turing tests are all about, right? But what might be even more intriguing is the fact that these same judges often think the humans they’re talking to are actually machines.

An intriguing study was recently published that analyzes these Turing test misidentifications. It’s based on transcripts of five-minute text-based conversations that took place in 2012 at Bletchley Park in England. The authors of the study analyze the conversations and form hypotheses on why the human interrogators thought their equally human interlocutors were chatbots.

In looking through the transcripts of these conversations, it’s very clear that the human participants are trying to trick the judges into thinking they are chatbots. That’s really the only conclusion you can draw from reading the transcripts. Apparently, the humans weren’t given any direct instructions to mislead the judges. What they were told, according to the study’s authors, was to be themselves, but they were “asked not to make it easy for the machines because it was the machines who/which were competing against them for humanness.” I suppose you could interpret this instruction in numerous ways, but most human participants seemed to interpret it as license to pretend to be machines.

In my opinion, this interpretation is not effective at all. Would it make it harder for judges to identify the real chatbots if the humans acted like chatbots too? That doesn’t make sense to me. Chatbots are so obviously not human that it seems you’d put the machines at a huge disadvantage by acting as human as you possibly could. Acting human shouldn’t be hard for a real person, and it would presumably make conversations with chatbots more detectable. How about just exhibiting some evidence that you actually care about the person you’re talking to and about what they’re feeling and thinking? Wouldn’t that palpable shred of humanness stand out sharply against the cold repartee of a chatbot?

Based on the evidence, the Turing tests administered at Bletchley Park don’t seem like a truly fair game, and they also seem pretty screwy. But it’s nonetheless interesting to observe the ploys that the human interlocutors use to successfully fool the judges into classifying them as chatbots. (Again, I’m not sure why they’d want to do this, but apparently that was their goal.) At the conclusion of the paper, the authors provide botmasters with concrete do’s and don’ts for building their chatbots.

The paper provides detailed analysis of 13 text conversation transcripts. Certain types of conversational behavior, it turns out, are often perceived by humans as a flag that they are talking to a machine. If enough of these flags appear in the short exchange, the interrogator is likely to classify their interlocutor as a chatbot (a toy sketch of that scoring idea follows the list). Here’s a list of the dialogue behaviors that raise suspicion:

  • Not answering the question, but instead bringing up another, seemingly unrelated topic. This appears to be a diversionary tactic, even if it might be a justified desire to change the topic.
  • Trying to be funny. Attempts at humor are viewed as a way to get around having to answer the question.
  • Appearing to know too many facts. If you sound like an encyclopedia, you’re probably a machine with access to vast databases.
  • Answering with short, bland, incomplete sentences that don’t really say anything. This just doesn’t seem human, especially if it’s your dominant communication mode.
  • Being a smart aleck. Offering responses like “it’s none of your business” or “so what’s it to you, anyway?” comes off sounding like a cheap diversionary tactic.
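
The study’s implicit model is essentially a threshold classifier: tally the flags and, past some limit, call the interlocutor a machine. Here’s a toy sketch of that idea (my own illustration; the flag names, weights, and threshold are hypothetical, not taken from the paper):

```python
# A toy flag-counting classifier (hypothetical weights and threshold).
SUSPICION_FLAGS = {
    "topic_dodge": 1,     # answers with a seemingly unrelated topic
    "joke_attempt": 1,    # tries to be funny instead of answering
    "encyclopedic": 1,    # knows too many facts
    "bland_fragment": 1,  # short, empty, incomplete sentences
    "smart_aleck": 1,     # "none of your business"-style deflections
}

def judge(observed_flags: list[str], threshold: int = 2) -> str:
    """Classify the interlocutor once enough suspicion flags accumulate."""
    score = sum(SUSPICION_FLAGS.get(flag, 0) for flag in observed_flags)
    return "chatbot" if score >= threshold else "human"

print(judge(["joke_attempt", "smart_aleck"]))  # -> chatbot
print(judge(["bland_fragment"]))               # -> human
```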

What does this mean for botmasters? I suppose it means that the typical “easy” ways to keep a chatbot conversation going are not very likely to succeed. Given the recent success of the chatbot Eugene Goostman, though, I’m not so sure this is completely true. The Goostman chatbot uses all the ploys described above, as you can see in a conversation documented by Scott Aaronson. So perhaps one of the tricks in convincing humans that your chatbot is a real person is to provide it with a strong characterization and a biographical backstory that might explain away its annoying quirkiness. If you’re a human though, it seems your best strategy is to ditch the quirkiness. Just be a person. If you’re human, that should come naturally.


How Intelligent Assistants Might Predict What Restaurants You’ll Like

The Technology Review published an article on Jetpac, a new travel app that uses image-recognition technology to rate and recommend restaurants and other travel destinations. Whereas most recommendation apps rely on user reviews, Jetpac applies deep-learning-based image analysis algorithms to crowdsourced images, in this case photos on Instagram, to make determinations about a venue.

How does this work? When it comes to restaurants, the algorithm analyzes Instagram photos and tries to identify specific objects. For example, if it picks up a prevalence of martini glasses or wine glasses, the program assumes the restaurant is higher class. If it finds more plastic cups or beer bottles, it assumes a lower-end establishment. The program can also determine whether the restaurant is pet friendly by counting the number of pets at outdoor tables in photos.

To get a sense of how well people like the restaurant, or whether it’s a fun place to hang out, the program looks at how many people are smiling or laughing in the photos. By observing what people are wearing, the app can also try to make judgments about the general type of clientele that frequents the place (apparently chunky glasses are indicative of hipster types).
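Here’s a rough sketch of how that kind of venue scoring might look once a deep-learning model has labeled the objects in each photo. This is my own simplification: the label names and thresholds are hypothetical, and Jetpac’s actual pipeline isn’t documented at this level of detail:

```python
from collections import Counter

def score_venue(detected_labels: list[str]) -> dict:
    """Derive venue attributes from object labels detected in crowd photos."""
    counts = Counter(detected_labels)
    upscale = counts["martini_glass"] + counts["wine_glass"]
    casual = counts["plastic_cup"] + counts["beer_bottle"]
    return {
        "class": "higher-end" if upscale > casual else "lower-end",
        "pet_friendly": counts["dog"] >= 3,  # hypothetical cutoff
        "fun_factor": counts["smiling_face"] / max(1, counts["face"]),
    }

labels = ["wine_glass", "wine_glass", "face", "smiling_face", "face", "dog"]
print(score_venue(labels))  # -> higher-end, not (yet) pet friendly, fun 0.5
```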

Can intelligent assistants leverage this kind of machine learning to help pick out the best places to recommend? Right now, most assistants are limited to Yelp ratings to rank the restaurant results they return. But what if a super powerful intelligent assistant could scan hundreds of Instagram photos when you ask about the best local coffee shop? And what if it knew you well enough to know you’ll probably like the place with outdoor tables and flower boxes? It could use that knowledge to tailor its recommendations just for you.

Though this type of photo analysis technology is in the early stages, apps like Jetpac give a hint of the future possibilities. At some point, they’re sure to be integrated into the smart assistants that serve us.


Why Your Intelligent Assistant Needs to Know All About You

Lawrence Flynn, the CEO of Artificial Solutions, recently wrote a guest post for Forbes online. In the post, Flynn focuses on various types of personalization and how this topic relates to intelligent virtual assistants. Artificial Solutions is a prominent player in the enterprise virtual assistant / virtual agent marketplace. I wrote briefly about their Teneo Network last spring.

In his blog post, Flynn spends some time drawing a distinction between what he terms “implicit personalization” and “explicit personalization.” Flynn defines implicit personalization as the type of understanding that a virtual assistant develops as it interacts with you. For example, if you instruct the assistant to send a text to your brother Joe, it can easily deduce that your brother’s name is Joe. If you frequently have your assistant create calendar entries for you to meet Joe for Mexican food in the evening, the assistant learns that you interact with your brother a lot and that you both enjoy Mexican food. There is obviously a whole range of implicit information that an observant intelligent assistant can catalog about you.
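A small sketch of what that implicit cataloging might look like, assuming the assistant already parses commands into intents and slots. This is my own illustration, not Artificial Solutions’ implementation, and the intent and slot names are hypothetical:

```python
# Build an implicit user profile from parsed commands (hypothetical schema).
profile: dict[str, set[str]] = {"contacts": set(), "cuisines": set()}

def observe(intent: str, slots: dict[str, str]) -> None:
    """Accumulate implicit facts from each command the assistant handles."""
    if intent == "send_text" and "relation" in slots:
        # "Send a text to my brother Joe" -> the brother's name is Joe
        profile["contacts"].add(f"{slots['relation']}:{slots['name']}")
    if intent == "add_calendar_event" and "cuisine" in slots:
        # Repeated dinners at Mexican restaurants -> a cuisine preference
        profile["cuisines"].add(slots["cuisine"])

observe("send_text", {"relation": "brother", "name": "Joe"})
observe("add_calendar_event", {"cuisine": "Mexican", "with": "Joe"})
print(profile)  # {'contacts': {'brother:Joe'}, 'cuisines': {'Mexican'}}
```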

Explicit personalization, by contrast, is not information that’s inferred through conversation or other means. Instead, it’s the knowledge an intelligent assistant collects when you deliberately feed it information about yourself. I recently interacted with a mobile personal assistant that seems to rely very heavily, if not exclusively, on explicit personalization. The assistant is called Curious Cat and it’s available on the Android platform. When you start up the app, the cuddly-looking cat meows and then begins to bombard you with questions. It asks for your address, birth date, occupation, employer, whether you own an apartment or house, whether you have a car, a motorcycle, or a boat. The prying questions go on and on.

Curious Cat probably has lots of great features and capabilities, and I don’t want to disparage it. The process of feeding it information about yourself is tedious, though. Being asked so directly for private information such as your phone number feels like an invasion of privacy. The end result of the explicit personalization (knowledge feeding) process is probably the same as the outcome of implicit personalization: in both cases, the virtual assistant knows a whole lot about you and could potentially use that knowledge to transgress against your privacy. But implicit personalization seems far less intrusive as a process.

Flynn addresses the dilemma that users of intelligent personal assistants face. By providing the software with many details about yourself, you forfeit your privacy and run the risk of having your data used in ways to which you might object. It’s hardly feasible to review the small print of every privacy policy and determine who your data is being shared with or how it’s being protected. But without sharing personal information, the intelligent assistant will be limited in how it can help you. As I’ve written previously, leveraging a mobile personal assistant requires a certain level of data sharing. Google Now is pretty useless if you don’t allow it to read the content of your Gmail messages to proactively track shipments, flights, reservations, and other critical information. Providing Google Now with that access allows it to implement very effective implicit personalization.

As Flynn states, there’s a lot of discussion that still needs to occur about privacy concerns and how intelligent assistants access, store, and use personal information. In the meantime, intelligent assistant vendors and developers need to firm up their privacy policies and ensure that they have an effective strategy in place for protecting user data and being transparent about how they use it.

Social Robots on the Move

MIT robotics professor Cynthia Breazeal recently launched JIBO, a social robot, via a Kickstarter campaign. Like the social AI cube EmoSpark, JIBO seems to have struck a chord with crowdfunding audiences. With 28 days still to go, the JIBO project has raised $656K, far exceeding its funding goal.

So what is it that people find so compelling about social robots? Some folks are probably intrigued by the prospect of having a robot companion that brings to life the kind of intelligent robot friends they know from science fiction. Others might just find JIBO’s cute robot personality attractive and want one of their own.

JIBO is being pitched as a robot for the family. It has facial recognition software that allows it to recognize who people are, speak to them by name, and deliver messages meant for them. It’s also supposed to be able to tell stories to children. JIBO is equipped with a round screen for a face, and it can change its expression or display graphics to emphasize what it’s saying.

Based on an interview with Dr. Breazeal, it sounds like JIBO has limited conversational ability at this point. It’s programmed to recognize certain commands and carry out specific functions. The idea is that JIBO’s capabilities will expand over time. A software development kit (SDK) will allow programmers to write new code that accesses JIBO’s sensory systems and brain and adds new features. I suppose content creators might be able to use the SDK to add new stories to JIBO’s storytelling repertoire too, but that’s not spelled out explicitly.

EmoSpark, which launched earlier this year with its own successful Indiegogo campaign, isn’t nearly as cute as JIBO. EmoSpark looks more like a funky clock-radio. It’s a talking cube. Like JIBO, it works via a WiFi connection. But based on the demos, EmoSpark is aiming to be somewhat more conversational. It will have access to Freebase in order to respond to fact-based questions. EmoSpark is also being pitched as an emotionally aware robot.

At the present time, though, neither EmoSpark nor JIBO has actually shipped. Both are still in the design and/or production phase. So it remains to be seen what features, and how much real conversational ability, will make it into the shipped versions of these social robots. There’s no doubt that social robots are creating a lot of buzz, though. And the market for these products, even if they’re more like toys than robots at this point, seems strong.

New Intelligent Virtual Assistant Report by Transparency Market Research

Transparency Market Research recently announced the availability of a new report on the Intelligent Virtual Assistant (IVA) market. The Transparency report comes on the heels of a similar study published by TechNavio this spring. The TechNavio study predicted a compound annual growth rate (CAGR) for the IVA market of 39.32% over the period 2013-2018. The Transparency report projects numbers that are a little shy of that, calling for a CAGR of 30.6% from 2013 to 2019. They valued the IVA market at $352 million in 2012 and project it to reach $2.1 billion by 2019.
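As a quick sanity check on those figures (my own arithmetic, not the report’s), the growth rate and the dollar projections are roughly consistent:

```python
# Check: does $352M in 2012 at ~30.6% CAGR land near $2.1B in 2019?
base_2012 = 352e6  # reported 2012 market value in USD
cagr = 0.306       # reported compound annual growth rate
years = 7          # 2012 -> 2019

projected_2019 = base_2012 * (1 + cagr) ** years
print(f"Projected 2019 market: ${projected_2019 / 1e9:.2f}B")  # ~$2.28B

# Working backward from the report's endpoints instead:
implied_cagr = (2.1e9 / base_2012) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~29.1%, near the stated 30.6%
```

The small gap presumably comes from rounding, or from the CAGR being measured over 2013-2019 rather than from the 2012 base.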

According to the report summary, North America accounted for 39.2% of the total IVA market share, while Asia Pacific is projected to be the fastest growing market for the technology in the coming years. Europe follows North America in its current usage of IVAs.

The report cites the major industry participants as NextIT, Creative Virtual, VirtuOz, Anboto Group, CodeBaby, IntelliResponse, Nuance, SpeakToit, Artificial Solutions, and eGain. Obviously, the market continues to be fragmented, even though the listed companies don’t all address the same target customers or use cases. SpeakToit is somewhat of an outlier in the group, as they provide a mobile personal assistant and an embeddable intelligent assistant platform, but don’t really target the enterprise CRM market. For that matter, CodeBaby has a bit of a different target customer as well and, probably for that reason, didn’t appear in either the TechNavio report or the Enterprise Virtual Assistant report published earlier this year by Opus Research.

In investigating the IVA space, Transparency Market Research evaluated three distinct end user segments: individual users, small and medium enterprise users, and large enterprise users. They also looked separately at text-to-speech technology versus speech recognition. I take that to mean that they evaluated IVAs that only take text input separately from those that have built-in speech recognition features.

The report also evaluates the market and vendors based on Porter’s Five Forces Analysis, where (according to Wikipedia) the five forces are:

  • Threat of new entrants
  • Threat of substitute products or services
  • Bargaining power of customers (buyers)
  • Bargaining power of suppliers
  • Intensity of competitive rivalry

If you’re curious to find out more, the report is available for purchase directly from Transparency Market Research, with a single user license priced at $4,795.

Does SRI’s Kasisto Represent a New Generation of Enterprise Virtual Assistant?

SRI recently unveiled a new virtual personal assistant product called Kasisto. Two statements in the press release caught my attention. The first was that Kasisto is able to perform complex tasks for the user, tasks that require complex domain knowledge. The second was that enterprises can use Kasisto to integrate their own branded conversational virtual personal assistants into their mobile applications.

Both of these capabilities are lacking in the current generation of virtual agents / intelligent virtual assistants. While top-of-the-line virtual assistants do an adequate job of answering basic customer questions, this same technology is very limited in the types of transactions it can perform. The virtual assistants I know about can commonly add a meeting to a calendar or dial a phone number. But they can’t change your plane reservation or work with your doctor to set up an appointment that fits both of your schedules. (Maybe X.ai’s assistant can do the latter, but it would have to be via email.)

What kind of complex tasks can Kasisto perform, and what type of training or configuration is required to get the assistant to execute these tasks properly? As of yet, I haven’t been able to find any demos of Kasisto in action. According to the press release, SRI demonstrated the new intelligent assistant at the FinTech Innovation Lab Demo Day in New York City. They demoed the product with BBVA banking group, one of their first customers. I could imagine that a smart virtual banking assistant could process your order for more checks, or stop payment on a specific check. It certainly ought to be able to tell you your account balance too, but what other complex tasks can it perform? Perhaps SRI will release a demo of Kasisto performing banking transactions soon. An article in American Banker gives the example that Kasisto can correctly answer a question such as “How much money did I spend at Whole Foods this month?” But providing an answer that’s easily searchable doesn’t really qualify as a complex task.
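To see why, consider what answering that Whole Foods question actually requires once the language-understanding step has produced an intent and slots: it reduces to a filtered sum over transaction records. Here’s a hedged sketch (my own illustration, not Kasisto’s pipeline; the data and function are hypothetical):

```python
# Answering "How much did I spend at Whole Foods this month?" as a
# filtered aggregation over (hypothetical) transaction records.
from datetime import date

transactions = [
    {"merchant": "Whole Foods", "amount": 54.20, "date": date(2014, 7, 3)},
    {"merchant": "Whole Foods", "amount": 31.75, "date": date(2014, 7, 19)},
    {"merchant": "Shell",       "amount": 40.00, "date": date(2014, 7, 5)},
]

def spend_at(merchant: str, year: int, month: int) -> float:
    """Sum transactions for one merchant in one calendar month."""
    return sum(t["amount"] for t in transactions
               if t["merchant"] == merchant
               and t["date"].year == year and t["date"].month == month)

print(spend_at("Whole Foods", 2014, 7))  # -> 85.95
```

The genuinely hard parts, multi-turn dialogue and transactions that change state (rebooking a flight, scheduling with a third party), are a different order of difficulty.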

The fact that SRI has designed Kasisto as a “conversational platform” that enterprises can build upon is an interesting and exciting approach. If virtual assistants are going to become more mainstream, I think businesses need an easy way to brand their own virtual agent personality and make that personality available to mobile apps. If there’s a mechanism that simplifies the process of creating a virtual agent tailored to the business, that’s a positive thing.

We’ll be on the lookout for more information about SRI’s Kasisto.