Insurify Car Insurance Chatbot – More Bots Focus on Conquering Verticals

Despite predictions that 2017 may see the demise of the much-hyped chatbot, some chatbot providers seem to be finding a solid market for their creations. I wrote a post a couple of weeks ago on the Opus Research site that talked about chatbots focused on specific verticals. Companies such as Kasisto that combine natural language, AI, and conversational technologies to address very targeted markets (in this case banking and finance) seem to be gaining a ton of traction.

Yesterday I posted on the Opus Research site about Insurify. Insurify recently issued a press release announcing a large round of funding led by MassMutual Ventures and Nationwide Ventures. These insurance giants realize that consumer behavior is continuing to shift away from desktop research and shopping to mobile research and shopping.

While conversational experiences may not be well suited for some shopping activities, they might be a great fit for something like finding car insurance.

If you think about it, what happens when you either call your insurance broker or fill out an online form to get insurance quotes? You end up answering a lot of predictable questions. So why not just let a knowledgeable chatbot ask you the questions as you type the answers into your phone while you wait your turn at the dentist?
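The scripted, predictable nature of an insurance quote interview is exactly what makes it chatbot-friendly. As a toy sketch (the questions and field names are invented for illustration, not Insurify's actual flow), the core loop is just "ask each question, collect each answer":

```python
# A toy sketch of the scripted Q&A an insurance quote chatbot might run.
# Questions and field names are illustrative only.

QUOTE_QUESTIONS = [
    ("zip_code", "What ZIP code is the car kept in?"),
    ("vehicle", "What is the year, make, and model of the vehicle?"),
    ("annual_miles", "Roughly how many miles do you drive per year?"),
]

def run_quote_interview(answer_fn):
    """Ask each question in turn, collecting answers into a profile dict.

    `answer_fn` stands in for the messaging channel: it takes a question
    string and returns the user's reply.
    """
    profile = {}
    for field, question in QUOTE_QUESTIONS:
        profile[field] = answer_fn(question).strip()
    return profile

# Simulate a user replying over chat.
canned = iter(["02139", "2015 Honda Civic", "8000"])
profile = run_quote_interview(lambda q: next(canned))
print(profile["zip_code"])  # "02139"
```

A real system would layer natural language understanding on top, so the bot can handle "about eight thousand miles, I guess" as gracefully as "8000" — but the underlying slot-filling structure is the same.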

According to the press release, 15 of 20 U.S. insurance carriers are already participating in the Insurify platform in some way or another.

Chatbots obviously have the attention of the insurance industry. They must be onto something.

Amazon Connect – Leveraging the Power of Alexa and AWS for Customer Contact

A few weeks ago I heard a rumor that Amazon was preparing to enter the customer contact center market. According to the rumor, Amazon’s solution would utilize the speech recognition and natural language processing tools that support its Alexa Voice Service and Lex products. I was a bit surprised to hear that Amazon was interested in the customer service and self-service market given the competitive nature of the landscape.

This week Amazon confirmed the rumor with the official announcement of Amazon Connect. The service, currently available only in the U.S., is billed as a full-featured customer contact center that runs atop Amazon’s proven AWS cloud-based computing platform. According to the press release, a company can set up a contact center with just a few clicks and then pay by the minute for actual usage, foregoing the lock-in of long-term contracts.

Businesses using Amazon Connect have access to a graphical interface to design contact flows according to their unique processes, without the need for expensive consulting services. They can also leverage Amazon’s Lex, the ASR and NLP technology powering Alexa, to enable their customers to use natural language when engaging with the contact center either via voice or text.

Todd Bishop covered the Amazon announcement in an article on GeekWire. That article has links to two Amazon videos that provide further details of the new service. One video features Amazon’s Jeff Barr, who explains that Amazon Connect offers an IVR, adaptive analytics that can help predict a customer’s needs, natural language interactions, and skills-based call routing. From the sound of it, Amazon is offering an enterprise intelligent assistant solution to compete with other vendors in that market.
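Skills-based routing, one of the features Barr describes, is conceptually simple: each incoming contact needs a particular skill, and the system hands it to a qualified, available agent. Here is a toy sketch of the idea (agents, skills, and availability flags are invented; Amazon Connect's actual routing profiles are far richer):

```python
# Toy illustration of skills-based call routing: each contact carries a
# required skill, and we pick the first available agent who has it.
# Agents and skills here are invented for illustration.

AGENTS = [
    {"name": "Ana",   "skills": {"billing", "spanish"}, "available": True},
    {"name": "Ben",   "skills": {"tech_support"},       "available": False},
    {"name": "Chloe", "skills": {"tech_support"},       "available": True},
]

def route_contact(required_skill, agents=AGENTS):
    """Return the first available agent with the required skill, or None."""
    for agent in agents:
        if agent["available"] and required_skill in agent["skills"]:
            return agent["name"]
    return None  # queue the contact until a qualified agent frees up

print(route_contact("tech_support"))  # Chloe (Ben has the skill but is busy)
print(route_contact("billing"))       # Ana
```

Production systems add queue priorities, wait-time targets, and proficiency levels on top, but matching required skill to available agent is the heart of it.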

The Amazon press release showcases several customers already using its new contact center service. Amazon Connect integrates with many leading CRM solutions, while taking advantage of the AWS platform and accompanying microservices with which many businesses are already familiar. With continued pressure to move to the cloud, Amazon’s comprehensive contact center solution seems compelling.

Will Amazon’s announcement prove disruptive for the automated customer service and enterprise intelligent assistant market? That remains to be seen. But leveraging the technological fruits of its AWS and Alexa platforms to provide a fully cloud-based, natural language-capable contact center seems like a smart move for the retail and technology giant.

A slightly different version of this article was first posted on the Opus Research blog.

Macy’s On Call Offers Location-Based Intelligent Assistance

Macy’s is experimenting with a new form of location-aware intelligent assistance. Leveraging technology from Satisfi and IBM Watson, Macy’s is rolling out what it calls “Macy’s On Call” in a select number of stores. The service acts like an intelligent assistant that understands the customer’s current location without having to ask.

Customers shopping in a Macy’s store that supports the On Call service can enter questions in natural language in a mobile web user interface. For example, a customer searching for a new toaster oven might ask where the kitchen appliances are located, and the On Call automated assistant will direct the customer to the right spot.

The new service combines two technologies: Satisfi offers a platform that can ascertain a user’s location from their smartphone data and that can respond to the user’s natural language requests. IBM Watson provides a cognitive computing platform that understands natural language inquiries and searches through complex knowledge sources to find information with the highest probability of answering a specific question.
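To make the combination concrete, here is a toy sketch of pairing a location signal with a natural-language answer lookup. The store identifier, floor map, and naive keyword matching are all invented for illustration; the real Satisfi/Watson stack is far more sophisticated:

```python
# Toy sketch: answer "where is X?" questions using the shopper's current
# store, looked up in a per-store floor map. All data is invented.

STORE_MAPS = {
    "macys_downtown": {
        "kitchen appliances": "Level 3, near the east escalator",
        "shoes": "Level 1, rear of the store",
    },
}

def answer_where(store_id, query):
    """Naive keyword match of the query against the store's floor map."""
    floor_map = STORE_MAPS.get(store_id, {})
    for department, location in floor_map.items():
        if department in query.lower():
            return f"{department.title()}: {location}."
    return "Sorry, I couldn't find that department."

print(answer_where("macys_downtown", "Where are the kitchen appliances?"))
```

The point of the sketch is the division of labor: location awareness narrows the answer space to one store's data, and language understanding maps the shopper's phrasing onto it.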

The combination of location-based awareness, natural language understanding, and the ability to find answers about products, product locations within specific stores, inventory, pricing, etc. enables Macy’s to offer an innovative and powerful new type of intelligent assistance to its shoppers.

During the recent MobileBeat 2016 event there was lots of discussion about engaging with customers while they’re inside the store. Nichele Lindstrom, director of digital with Whole Foods, noted in a presentation that over 50% of online recipe searches happen in the grocery store aisle. Whole Foods decided to launch a Facebook Messenger chatbot to help its shoppers with recipes and other questions.

Macy’s On Call is an example of another natural language-based self-service offering that helps customers when and where they need it most: onsite at a retail location in the direct path to purchase. Now that the technology supports this type of assistance, we’re likely to see more brands extend the reach of self-service to follow customers wherever they go.

This post was originally published at Opus Research.

Two Sources of News on Chatbots and Messaging

Over the past month or so I’ve taken advantage of two new sources of information about what’s going on in the world of chatbots and messaging. There seems to be a trend where folks provide curated links to interesting, recent posts and articles around specific technology themes. Two examples of curated weekly lists are Chat Bots Weekly and Messaging Weekly. I think I ran across both of these lists on Product Hunt, which seems to be a great place these days for discovering products and lists at the cutting edge of the bot world.

Chat Bots Weekly is curated by Omar Pera. Each week Omar selects a handful of recent articles on chatbots from publishers and blog sites. Omar is following the huge upswing in the bot hype cycle to bring readers stories focused on bots, conversational interfaces, and what it all means for businesses and developers.

Messaging Weekly is curated by the team at Smooch. As with Chat Bots Weekly, Messaging Weekly typically offers up four or so articles from around the web that deal with conversational UI, how to design and build conversational UIs, and who’s doing what in the space.

Given the subject matter of these two weekly lists, there can be a little bit of overlap in their content. And since I follow this space pretty closely, the lists sometimes contain articles I’ve already run across during the week. But I’m a fan of both of the lists and recommend them. You can sign up to have each list delivered to your email account of choice by going to their websites.

It’s great that Omar and the team at Smooch are taking the time to compile these weekly lists to help us all stay in the loop. With so much happening these days in the world of conversational UI, it’s hard to keep up! But we wouldn’t want to miss anything.

On a side note, those of you who have been following my blog may have noticed that I’m not posting here as often as I used to. You can find my posts on the topic of intelligent assistants, conversational UI, bots and so forth on the Opus Research site. Apart from my work as an analyst at Opus, I’m busy with a new technology startup called Hutch.AI. We’re putting finishing touches on a bedtime storytelling skill for the Amazon Echo. I’ll be sure to post about it once it’s launched.

Lauren Kunze of Pandorabots On Chatbots

Last week Lauren Kunze of Pandorabots wrote a great article for TechCrunch titled “On Chatbots.” If anybody knows a thing or two about chatbots, it’s Lauren. I like the analogy she uses at the beginning of the article. Chatbots, she writes, are like the proverbial ugly duckling. Suddenly out of nowhere these much maligned creatures are taking our messaging platforms by storm and strutting about like beautiful swans.

Kunze goes on to address and debunk several myths of chatbots. One of the myths she confronts is the notion that chatbots are the same thing as bots. To be honest, the distinction between the two species had started to blur in my mind.

For Kunze, chatbots are first and foremost conversational. They exist to interact with humans in a conversational way, whether that be in the form of text or speech. So a bot that does things but isn’t conversational doesn’t fit well into Kunze’s chatbot category.

And just how easy is it to build one? There may be more work involved than you’ve been led to believe. There are tools to support your efforts, though, if you know where to look.

Can chatbots really provide value to businesses and their customers? What tasks are they well-suited for and where do their weaknesses lie?

I highly encourage you to read the original article to learn more about misconceptions you may have about chatbots and to understand why you may be missing a golden opportunity.

The Case for Conversational Interfaces

IPG Media Lab hosted a panel discussion on the topic of Conversational Interfaces. The panelists included representatives from Msg.ai, X.ai, and SoundHound. The general consensus among panelists was that messaging is solidifying its place as the preferred mode of mobile communication. It’s true that voice interfaces are rapidly improving and gaining traction. And email is probably still the channel that businesses use most often to schedule meetings. But consumers are flocking to messaging platforms to communicate with friends and, increasingly, even to do business.

Companies like Msg.ai and Imperson are popping up to help brands design conversational characters that can interact with consumers via popular messaging platforms. During the IPG Media Lab panel, Msg.ai founder and CEO Puneet Mehta spoke about a campaign his company worked on for Sony Pictures to promote the Goosebumps film. Msg.ai created a conversational chatbot to represent the snarky Slappy character from the film. This promotion was similar to the one involving Imperson’s promotion of The Muppets Show that I wrote about a few months ago.

What are the compelling reasons to start looking at shifting brand promotion to messaging platforms? How can you leverage existing intelligent assistant technologies to get a leg up on conversational interfaces? I examine these questions in more depth in my recent post Messaging: The Future of Brand Engagement? on the Opus Research site.

Teaching Machines to Understand Us Better

Last week I wrote about the importance of emotional intelligence in virtual assistants and robots on the Opus Research blog. At the recent World Economic Forum in Davos there was an issue briefing on infusing emotional intelligence into AI. It was a lively and interesting discussion. You can watch a video of the half-hour panel. I’ll summarize my key takeaways.

The panel members were three prominent academics in the field of emotional intelligence in computer technology:

  • Justine Cassell, Associate Dean, Technology, Strategy and Impact, School of Computer Science, Carnegie Mellon University, USA
  • Vanessa Evers, Professor of Human Media Interaction, University of Twente, Netherlands
  • Maja Pantic, Professor of Affective and Behavioral Computing, Imperial College London, United Kingdom

Maja Pantic develops technology that enables machines to track areas of the human body that “broadcast” underlying emotions. The technology also seeks to interpret the emotions and feelings of a person based on those inputs.

Vanessa Evers has been working with Pantic on specific projects that apply a machine’s ability to understand emotion and even social context. Evers emphasizes how critical it is for machines to understand social situations in order to interact with human beings effectively.

One interesting project she cites involves an autonomous shuttle vehicle that picks up and delivers people to terminals at Schiphol Airport. They are training the shuttle to recognize family units. It wouldn’t be effective if the shuttle made room for mom and dad and then raced off leaving two screaming children behind. Evers also cites the example of the shuttle going around someone who is taking a photo instead of barging right in front of them. Awareness of social situations is critical if we’re to accept thinking machines into our lives.

Justine Cassell builds virtual humans, and her goal is to construct systems that evoke empathy in humans (not to build systems that demonstrate or feel empathy themselves). This is an interesting distinction. Empathy is what makes us human, Cassell notes, and many people have a difficult time feeling empathy or interacting effectively with other people. This is especially true of individuals with autism or even those with high-functioning forms of Asperger’s.

In Cassell’s work, she has shown that interactions with virtual humans can help people with autism better grasp the cues of emotion that can be so elusive to them under normal conditions. She has also created virtual peers for at-risk children in an educational environment. The virtual peer gets to know the child and develops a rapport, using what Cassell calls “social scaffolding” to improve learning. For example, if a child feels marginalized for speaking a dialect different from that of the teacher, the virtual peer will speak to the child in his or her dialect, but then model how to switch to standard English when interacting with the teacher. The child is taught to stay in touch with her home culture, but also learns how to succeed in the classroom.

Another notable comment by Cassell was that she never builds virtual humans that look too realistic. Her intent is not to fool someone into believing they are interacting with a real human. People need to be aware of the limits of the virtual human, while at the same time allowing the avatar to unconsciously evoke a human response and interaction.

The panel cited other examples from research that illustrate how effective virtual assistants can be in helping humans improve their social interactions. In the future, it may be possible for our intelligent assistants to give us tips on how to interact more effectively with those around us. For example, a smart assistant might buzz us if it senses we’re being too dominant or angry. The technology isn’t quite there yet, but it could be headed in that direction.

Overall the panelists were optimistic about the direction of artificial intelligence. They also expressed optimism in our ability to ensure our future virtual and robotic companions understand us and work with us effectively. It’s not about making artificial intelligence experience human emotion, they emphasized, but about building machines that understand us better.