Update to Opus Research’s Intelligent Assistance Landscape

Last week the team at Opus Research published an update to the Intelligent Assistance Landscape. This update represents the first major revision since the landscape was first published in partnership with VentureBeat last fall.

This new version includes updates to the industry players that populate various categories across the landscape. Opus has also refined the categories themselves. If you haven’t seen the landscape or had a chance to delve into it, here’s a quick synopsis.

Intelligent Assistance Landscape

The top half of the diagram identifies core technologies that enable intelligent assistance. Opus distinguishes two main groups of enabling technologies.

Conversational technologies underpin the natural language exchange between humans and machines. Speech I/O services facilitate the understanding of spoken words and enable machines to talk. Text I/O services support natural language input and understanding via text. This category can also include dialog management services and chatbots. Avatars provide embodiment for intelligent agents, while emotion and sentiment analysis enable software to interpret and act upon knowledge of human emotions and context.
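To give a feel for the sentiment-analysis piece, here's a minimal sketch of my own (not tied to any vendor on the landscape) that scores the tone of an incoming message with NLTK's VADER analyzer and decides whether a human should take over. It assumes NLTK is installed and the VADER lexicon has been downloaded.

```python
# Minimal sketch: scoring the sentiment of an incoming chat message.
# Assumes NLTK is installed and the VADER lexicon has been downloaded
# via nltk.download("vader_lexicon"). Illustration only, not any
# vendor's implementation.
from nltk.sentiment.vader import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def route_by_sentiment(message: str) -> str:
    """Return a coarse routing decision based on the message's tone."""
    scores = analyzer.polarity_scores(message)  # keys: neg, neu, pos, compound
    if scores["compound"] <= -0.5:
        return "escalate_to_human"   # clearly frustrated customer
    return "continue_with_assistant"

print(route_by_sentiment("This is the third time my order has been wrong!"))
```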

Intelligent Assistance technologies are the powerful core services that help machines understand meaning and intent and learn how to serve us better. These technologies include Speech Analytics, Natural Language Processing, Machine Learning, Semantic Search and Knowledge Management.
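As a rough illustration of what the NLP and machine learning layer does, here's a small sketch that trains a bag-of-words intent classifier with scikit-learn. The utterances and intent labels are invented for the example; a real deployment would rely on far richer data and purpose-built NLU services.

```python
# Toy sketch of intent classification: map a user's utterance to an intent.
# Training data and intent labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "what time do you open", "are you open on sunday",
    "where is my package", "track my order please",
    "i want to return this item", "how do i get a refund",
]
intents = ["hours", "hours", "order_status", "order_status", "returns", "returns"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_utterances, intents)

print(model.predict(["has my order shipped yet"])[0])  # likely "order_status"
```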

The bottom half of the Intelligent Assistance Landscape provides a taxonomy for the various types of smart assistants. While the terminology used for these services is fluid, Opus Research has put a stake in the ground by establishing specific criteria for each category.

Opus defines Mobile and Personal Assistants as smart agents that understand us and whose primary purpose is to help us control the smart objects around us. Assistants such as Siri and Google Now, for example, activate functions on our mobile phones, Amazon’s Alexa controls objects in our smart home, and assistants in cars control the features of our connected vehicle.

Personal Advisors focus on helping us manage complex tasks. These assistants tend to be more specialized, and they are generally product agnostic. A personal travel advisor, for example, can assist with planning and booking trips, suggesting products and services from a wide array of providers.

Virtual Agents and Customer Assistants are customer-facing, self-service assistants. These assistants represent one company or brand. Their knowledge of the company's products and services is typically fairly broad, and they focus on answering the questions that customers ask most frequently.

Employee Assistants help people do their jobs within an enterprise. These assistants are generally integrated with the enterprise software applications that employees rely on most and they can also aggregate information to make it more readily available.

The domain of intelligent assistants is gaining increasing attention. The update to Opus Research’s Intelligent Assistance Landscape adds some insightful clarity around this complex topic.

Two Sources of News on Chatbots and Messaging

Over the past month or so I’ve taken advantage of two new sources of information about what’s going on in the world of chatbots and messaging. There seems to be a trend of folks providing curated links to interesting, recent posts and articles around specific technology themes. Two examples of curated weekly lists are Chat Bots Weekly and Messaging Weekly. I think I ran across both of these lists on Product Hunt, which seems to be a great place these days for discovering products and lists at the cutting edge of bots.

Chat Bots Weekly is curated by Omar Pera. Each week Omar selects a handful of recent articles on chatbots from publishers and blog sites. Omar is following the huge upswing in the bot hype cycle to bring readers stories focused on bots, conversational interfaces, and what it all means for businesses and developers.

Messaging Weekly is curated by the team at Smooch. As with Chat Bots Weekly, Messaging Weekly typically offers up four or so articles from around the web that deal with conversational UI, how to design and build conversational UIs, and who’s doing what in the space.

Given the subject matter of these two weekly lists, there can be a little bit of overlap in their content. And since I follow this space pretty closely, the lists sometimes contain articles I’ve already run across during the week. But I’m a fan of both lists and recommend them. You can sign up to have each one delivered to the email account of your choice on their respective websites.

It’s great that Omar and the team at Smooch are taking the time to compile these weekly lists to help us all stay in the loop. With so much happening these days in the world of conversational UI, it’s hard to keep up! But we wouldn’t want to miss anything.

On a side note, those of you who have been following my blog may have noticed that I’m not posting here as often as I used to. You can find my posts on the topic of intelligent assistants, conversational UI, bots and so forth on the Opus Research site. Apart from my work as an analyst at Opus, I’m busy with a new technology startup called Hutch.AI. We’re putting the finishing touches on a bedtime storytelling skill for the Amazon Echo. I’ll be sure to post about it once it’s launched.
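For readers curious about what an Echo skill looks like under the hood: a custom Alexa skill is essentially a web service, often an AWS Lambda function, that receives JSON requests from Alexa and returns JSON responses in the Alexa Skills Kit format. Below is a bare-bones sketch in that style; the TellStoryIntent name and the story text are placeholders, not the actual Hutch.AI implementation.

```python
# Bare-bones Lambda handler for a custom Alexa skill, following the
# Alexa Skills Kit request/response JSON format. "TellStoryIntent" and
# the story text are placeholders, not Hutch.AI's real skill.
def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type == "LaunchRequest":
        speech = "Welcome. Would you like to hear a bedtime story?"
        end_session = False
    elif (request_type == "IntentRequest"
          and event["request"]["intent"]["name"] == "TellStoryIntent"):
        speech = "Once upon a time, there was a sleepy little robot..."
        end_session = True
    else:
        speech = "Goodnight!"
        end_session = True

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }
```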

Ben Eidelson’s Look at the Messaging Landscape of 2016

Ben Eidelson published a very interesting article on Medium last week called The Messaging Landscape in 2016. Eidelson provides a great overview of why messaging has become the world’s most popular form of communication. He also offers insights into the expansion of messaging beyond person-to-person communication, as well as a look at the platforms and technologies poised to support this growth.

After reading Eidelson’s observations on what’s so great about messaging, his insights seem obvious in hindsight. But I’d never connected the dots the way that Eidelson does to really understand why messaging is such a compelling form of communication.

Here’s my summary of Eidelson’s key points on the beauty of messaging:

  1. Messaging is essentially asynchronous, but it can be synchronous when needed. There are so many benefits to reaching out to someone in an asynchronous manner. It takes lots of pressure off, it feels more polite and respectful of the other person’s time and space, and it requires so much less commitment than making a voice call.
  2. The messaging interactions we have with acquaintances and loved ones linger in our messaging apps as long-lived conversations. You can always refer back to previous conversations, so you have the whole history of your interactions with that person all in one convenient place.
  3. The conversation list in your messaging app becomes your default interface into the people most important in your life. The messaging app intuitively shows the people you’re currently or recently interacting with at the top. This native ordering makes messaging apps the most natural social platform of all.

Eidelson also looks at the hype around the potential for messaging to expand into business-to-consumer interactions. He’s a believer in the many benefits that messaging-based interactions can have for both businesses and the customers they serve.

The article also provides a good overview of the messaging landscape. Eidelson puts the players in this space into three main categories: end-user messaging apps, platform APIs, and assistants for “X.” His index of companies at the end of the post is a good guide.

If you’re interested in the world of messaging, the supporting vendors, and the potential for market opportunities, Eidelson’s post is definitely worth a read.

Lauren Kunze of Pandorabots On Chatbots

Last week Lauren Kunze of Pandorabots wrote a great article for TechCrunch called On Chatbots. If anybody knows a thing or two about chatbots, it’s Lauren. I like the analogy she uses at the beginning of the article. Chatbots, she writes, are like the proverbial ugly duckling. Suddenly, out of nowhere, these much-maligned creatures are taking our messaging platforms by storm and strutting about like beautiful swans.

Kunze goes on to address and debunk several myths about chatbots. One of the myths she confronts is the notion that chatbots are the same thing as bots. To be honest, the distinction between the two species had started to blur in my mind.

For Kunze, chatbots are first and foremost conversational. They exist to interact with humans in a conversational way, whether that be in the form of text or speech. So a bot that does things but isn’t conversational doesn’t fit well into Kunze’s chatbot category.

And just how easy is it to build one? There may be more work involved than you’ve been led to believe. There are tools to support your efforts, though, if you know where to look.

Can chatbots really provide value to businesses and their customers? What tasks are they well-suited for and where do their weaknesses lie?

I highly encourage you to read the original article to learn more about misconceptions you may have about chatbots and to understand why you may be missing a golden opportunity.


The Case for Conversational Interfaces

IPG Media Lab hosted a panel discussion on the topic of Conversational Interfaces. The panelists included representatives from Msg.ai, X.ai, and SoundHound. The general consensus among panelists was that messaging is solidifying its place as the preferred mode of mobile communication. It’s true that voice interfaces are rapidly improving and gaining traction. And email is probably still the channel that businesses use most often to schedule meetings. But consumers are flocking to messaging platforms to communicate with friends and, increasingly, even to do business.

Companies like Msg.ai and Imperson are popping up to help brands design conversational characters that can interact with consumers via popular messaging platforms. During the IPG Media Lab panel, Msg.ai founder and CEO Puneet Mehta spoke about a campaign his company worked on for Sony Pictures to promote the Goosebumps film. Msg.ai created a conversational chatbot to represent the snarky Slappy character from the film. The promotion was similar to Imperson’s campaign for The Muppets that I wrote about a few months ago.

What are the compelling reasons to start looking at shifting brand promotion to messaging platforms? How can you leverage existing intelligent assistant technologies to get a leg up on conversational interfaces? I examine these questions in more depth in my recent post Messaging: The Future of Brand Engagement? on the Opus Research site.

Conversica’s Virtual Sales Assistant

A few weeks ago, Nellie Bowles of the Guardian wrote an article called With love from my robot: virtual assistants may secretly be emailing you. Bowles focused on technologies from X.ai and Clara Labs. Both companies offer virtual meeting coordinators that use natural language understanding and machine learning algorithms to coordinate meeting times by emailing all meeting participants. People who receive the emails often aren’t aware that the email was written by a bot.

This past week I had an opportunity to talk with some of the team from Conversica. Like X.ai and Clara, Conversica offers technology that uses a smart virtual assistant to carry out routine tasks using email. In the case of Conversica, the assistant focuses on augmenting a company’s sales staff. The virtual sales assistant contacts leads by composing, sending, and responding to emails.

Conversica has designed the technology to be so personable and effective at crafting emails that most people assume they’re interacting with a human sales associate. The bot never writes the exact same email twice, but varies greetings, phrasing, and other aspects of each communication to give them a spontaneous and genuine feel.
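To picture how that kind of variation might work, here's a toy sketch of my own (not a description of Conversica's actual system) that assembles a follow-up email from interchangeable greetings, bodies, and closings so that no two messages read exactly alike.

```python
# Illustrative sketch only: assembling a follow-up email from interchangeable
# parts so no two messages read exactly alike. This is a toy example of the
# general idea, not Conversica's actual approach.
import random

GREETINGS = ["Hi {name},", "Hello {name},", "{name}, hope your week is going well."]
BODIES = [
    "I wanted to follow up on your interest in {product}.",
    "Just checking in to see if you had any questions about {product}.",
]
CLOSINGS = ["Is there a good time to connect?", "Would a quick call this week work?"]

def compose_follow_up(name: str, product: str) -> str:
    """Pick one variant from each section and fill in the prospect's details."""
    parts = [random.choice(GREETINGS), random.choice(BODIES), random.choice(CLOSINGS)]
    return "\n\n".join(p.format(name=name, product=product) for p in parts)

print(compose_follow_up("Jordan", "the premium plan"))
```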

Having a virtual sales assistant offers many benefits to a company that lives or dies by how well it follows up on and closes leads. In some cases, Conversica’s virtual sales assistant actually has an edge over its human colleagues. The fact is, Conversica’s bot never gets its feelings hurt when a prospect ignores its emails or says no. As a result, the virtual sales assistant is remarkably persistent.

The benefit to the prospect is that, no matter how far down the lead queue they are, the company follows up with them as if it genuinely cares about winning their business. If they’re truly interested in the product, the sales bot connects them with a real person and makes sure that person doesn’t drop the ball.

To find out more about the history of the company and its underlying technology, see my full article Conversica Ramps Up Its Virtual Sales Assistant To Keep Tabs on Prospects on the Opus Research blog.

Teaching Machines to Understand Us Better

Last week I wrote about the importance of emotional intelligence in virtual assistants and robots on the Opus Research blog. At the recent World Economic Forum in Davos there was an issue briefing on infusing emotional intelligence into AI. It was a lively and interesting discussion. You can watch a video of the half-hour panel. I’ll summarize my key takeaways.

The panel members were three prominent academics in the field of emotional intelligence in computer technology:

  • Justine Cassell, Associate Dean, Technology, Strategy and Impact, School of Computer Science, Carnegie Mellon University, USA
  • Vanessa Evers, Professor of Human Media Interaction, University of Twente, Netherlands
  • Maja Pantic, Professor of Affective and Behavioral Computing, Imperial College London, United Kingdom

Maja Pantic develops technology that enables machines to track areas of the human body that “broadcast” underlying emotions. The technology also seeks to interpret the emotions and feelings of a person based on those inputs.

Vanessa Evers has been working with Pantic on specific projects that apply a machine’s ability to understand emotion and even social context. Evers emphasizes how critical it is for machines to understand social situations in order to interact with human beings effectively.

One interesting project she cites involves an autonomous shuttle vehicle that picks up people and delivers them to terminals at Schiphol Airport. The team is training the shuttle to recognize family units. It wouldn’t be effective if the shuttle made room for mom and dad and then raced off, leaving two screaming children behind. Evers also cites the example of the shuttle going around someone who is taking a photo instead of barging right in front of them. Awareness of social situations is critical if we’re to accept thinking machines into our lives.

Justine Cassell builds virtual humans, and her goal is to construct systems that evoke empathy in humans (not to build systems that demonstrate or feel empathy themselves). This is an interesting distinction. Empathy is what makes us human, Cassell notes, and many people have a difficult time feeling empathy or interacting effectively with other people. This is especially true of individuals with autism, even those with high-functioning forms of Asperger’s.

In her work, Cassell has shown that interactions with virtual humans can help people with autism better grasp the cues of emotion that can be so elusive to them under normal conditions. She has also created virtual peers for at-risk children in an educational environment. The virtual peer gets to know the child and develops a rapport, using what Cassell calls “social scaffolding” to improve learning. For example, if a child feels marginalized for speaking a dialect different from that of the teacher, the virtual peer will speak to the child in his or her dialect, but then model how to switch to standard English when interacting with the teacher. The child is taught to stay in touch with her home culture while also learning how to succeed in the classroom.

Another notable comment by Cassell was that she never builds virtual humans that look too realistic. Her intent is not to fool someone into believing they are interacting with a real human. People need to be aware of the limits of the virtual human, while at the same time allowing the avatar to unconsciously evoke a human response and interaction.

The panel cited other examples from research that illustrate how effective virtual assistants can be in helping humans improve their social interactions. In the future, it may be possible for our intelligent assistants to give us tips on how to interact more effectively with those around us. For example, a smart assistant might buzz us if it senses we’re being too dominant or angry. The technology isn’t quite there yet, but it could be headed in that direction.

Overall the panelists were optimistic about the direction of artificial intelligence. They also expressed optimism in our ability to ensure our future virtual and robotic companions understand us and work with us effectively. It’s not about making artificial intelligence experience human emotion, they emphasized, but about building machines that understand us better.