Macy’s On Call Offers Location-Based Intelligent Assistance

Macy’s is experimenting with a new form of location-aware intelligent assistance. Leveraging technology from Satisfi and IBM Watson, Macy’s is rolling out what it calls “Macy’s On Call” in a select number of stores. The service acts like an intelligent assistant that understands the customer’s current location without having to ask.

Customers shopping in a Macy’s store that supports the On Call service can enter questions in natural language in a mobile web interface. For example, a customer searching for a new toaster oven might ask where the kitchen appliances are located, and the On Call automated assistant will direct the customer to the right spot.

The new service combines two technologies: Satisfi offers a platform that can ascertain a user’s location from their smartphone data and that can respond to the user’s natural language requests. IBM Watson provides a cognitive computing platform that understands natural language inquiries and searches through complex knowledge sources to find information with the highest probability of answering a specific question.
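To make that division of labor concrete, here’s a minimal sketch of how a service like this might route a shopper’s question, assuming a simple lookup table of store departments. The function names and data are hypothetical stand-ins, not actual Satisfi or IBM Watson APIs.

```python
# Hypothetical sketch: combine location detection with natural language
# understanding to answer an in-store question. All names and data here
# are illustrative, not Satisfi or IBM Watson APIs.

STORE_DIRECTORY = {
    ("herald_square", "kitchen appliances"): "Level 8, near the escalators",
    ("herald_square", "shoes"): "Level 5, center of the floor",
}

def detect_store(device_data):
    """Stand-in for Satisfi-style location awareness from smartphone data."""
    return "herald_square"

def parse_department(question):
    """Stand-in for Watson-style natural language understanding."""
    text = question.lower()
    for (_, department) in STORE_DIRECTORY:
        if department in text:
            return department
    return None

def answer(question, device_data):
    store = detect_store(device_data)
    department = parse_department(question)
    location = STORE_DIRECTORY.get((store, department))
    if location:
        return f"You can find {department} at {location}."
    return "Sorry, I couldn't find that. Let me connect you with an associate."

print(answer("Where are the kitchen appliances?", device_data={}))
# -> You can find kitchen appliances at Level 8, near the escalators.
```

The real systems obviously do far more (disambiguation, inventory lookups, live pricing), but the basic pattern is the same: location narrows the knowledge base, and language understanding picks the answer within it.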

The combination of location-based awareness, natural language understanding, and the ability to find answers about products, product locations within specific stores, inventory, pricing, etc. enables Macy’s to offer an innovative and powerful new type of intelligent assistance to its shoppers.

During the recent MobileBeat 2016 event there was lots of discussion about engaging with customers while they’re inside the store. Nichele Lindstrom, director of digital with Whole Foods, noted in a presentation that over 50% of online recipe searches happen in the grocery store aisle. Whole Foods decided to launch a Facebook Messenger chatbot to help its shoppers with recipes and other questions.

Macy’s On Call is an example of another natural language-based self-service offering that helps customers when and where they need it most: onsite at a retail location in the direct path to purchase. Now that the technology supports this type of assistance, we’re likely to see more brands extend the reach of self-service to follow customers wherever they go.

This post was originally published at Opus Research.

Two Sources of News on Chatbots and Messaging

Over the past month or so I’ve taken advantage of two new sources of information about what’s going on in the world of chatbots and messaging. There seems to be a trend where folks provide curated links to interesting, recent posts and articles around specific technology themes. Two examples of curated weekly lists are Chat Bots Weekly and Messaging Weekly. I think I ran across both of these lists on Product Hunt, which seems to be a great place these days for discovering products and lists on the cutting edge of bots.

Chat Bots Weekly is curated by Omar Pera. Each week Omar selects a handful of recent articles on chatbots from publishers and blog sites. Omar is following the huge upswing in the bot hype cycle to bring readers stories focused on bots, conversational interfaces, and what it all means for businesses and developers.

Messaging Weekly is curated by the team at Smooch. As with Chat Bots Weekly, Messaging Weekly typically offers up four or so articles from around the web that deal with conversational UI, how to design and build conversational UIs, and who’s doing what in the space.

Given the subject matter of these two weekly lists, there can be a little bit of overlap in their content. And since I follow this space pretty closely, the lists sometimes contain articles I’ve already run across during the week. But I’m a fan of both lists and recommend them. You can sign up to have each list delivered to the email account of your choice on its website.

It’s great that Omar and the team at Smooch are taking the time to compile these weekly lists to help us all stay in the loop. With so much happening these days in the world of conversational UI, it’s hard to keep up! But we wouldn’t want to miss anything.

On a side note, those of you who have been following my blog may have noticed that I’m not posting here as often as I used to. You can find my posts on the topic of intelligent assistants, conversational UI, bots and so forth on the Opus Research site. Apart from my work as an analyst at Opus, I’m busy with a new technology startup called Hutch.AI. We’re putting the finishing touches on a bedtime storytelling skill for the Amazon Echo. I’ll be sure to post about it once it’s launched.

Lauren Kunze of Pandorabots On Chatbots

Last week Lauren Kunze of Pandorabots wrote a great article for TechCrunch called “On Chatbots.” If anybody knows a thing or two about chatbots, it’s Lauren. I like the analogy she uses at the beginning of the article. Chatbots, she writes, are like the proverbial ugly duckling. Suddenly, out of nowhere, these much-maligned creatures are taking our messaging platforms by storm and strutting about like beautiful swans.

Kunze goes on to address and debunk several myths about chatbots. One of the myths she confronts is the notion that chatbots are the same thing as bots. To be honest, the distinction between the two species had started to blur in my mind.

For Kunze, chatbots are first and foremost conversational. They exist to interact with humans in a conversational way, whether that be in the form of text or speech. So a bot that does things but isn’t conversational doesn’t fit well into Kunze’s chatbot category.

And just how easy is it to build one? There may be more work involved than you’ve been led to believe. There are tools to support your efforts, though, if you know where to look.

Can chatbots really provide value to businesses and their customers? What tasks are they well-suited for and where do their weaknesses lie?

I highly encourage you to read the original article to learn more about misconceptions you may have about chatbots and to understand why you may be missing a golden opportunity.

The Case for Conversational Interfaces

IPG Media Lab hosted a panel discussion on the topic of Conversational Interfaces. The panelists included representatives from Msg.ai, X.ai, and SoundHound. The general consensus among panelists was that messaging is solidifying its place as the preferred mode of mobile communication. It’s true that voice interfaces are rapidly improving and gaining traction. And email is probably still the channel that businesses use most often to schedule meetings. But consumers are flocking to messaging platforms to communicate with friends and, increasingly, even to do business.

Companies like Msg.ai and Imperson are popping up to help brands design conversational characters that can interact with consumers via popular messaging platforms. During the IPG Media Lab panel, Msg.ai founder and CEO Puneet Mehta spoke about a campaign his company worked on for Sony Pictures to promote the Goosebumps film. Msg.ai created a conversational chatbot to represent the snarky Slappy character from the film. This promotion was similar to the one involving Imperson’s promotion of The Muppets Show that I wrote about a few months ago.

What are the compelling reasons to start looking at shifting brand promotion to messaging platforms? How can you leverage existing intelligent assistant technologies to get a leg up on conversational interfaces? I examine these questions in more depth in my recent post Messaging: The Future of Brand Engagement? on the Opus Research site.

Teaching Machines to Understand Us Better

Last week I wrote about the importance of emotional intelligence in virtual assistants and robots on the Opus Research blog. At the recent World Economic Forum in Davos there was an issue briefing on infusing emotional intelligence into AI. It was a lively and interesting discussion. You can watch a video of the half-hour panel. I’ll summarize my key takeaways.

The panel members were three prominent academics in the field of emotional intelligence in computer technology:

  • Justine Cassell, Associate Dean, Technology, Strategy and Impact, School of Computer Science, Carnegie Mellon University, USA
  • Vanessa Evers, Professor of Human Media Interaction, University of Twente, Netherlands
  • Maja Pantic, Professor of Affective and Behavioral Computing, Imperial College London, United Kingdom

Maja Pantic develops technology that enables machines to track areas of the human body that “broadcast” underlying emotions. The technology also seeks to interpret the emotions and feelings of a person based on those inputs.

Vanessa Evers has been working with Pantic on specific projects that apply a machine’s ability to understand emotion and even social context. Evers emphasizes how critical it is for machines to understand social situations in order to interact with human beings effectively.

One interesting project she cites involves an autonomous shuttle vehicle that picks people up and delivers them to terminals at Schiphol Airport. The researchers are training the shuttle to recognize family units: it wouldn’t be effective if the shuttle made room for mom and dad and then raced off, leaving two screaming children behind. Evers also cites the example of the shuttle going around someone who is taking a photo instead of barging right in front of them. Awareness of social situations is critical if we’re to accept thinking machines into our lives.

Justine Cassell builds virtual humans, and her goal is to construct systems that evoke empathy in humans (not to build systems that demonstrate or feel empathy themselves). This is an interesting distinction. Empathy is what makes us human, Cassell notes, and many people have a difficult time feeling empathy or interacting effectively with other people. This is especially true of individuals with autism, or even those with high-functioning forms of Asperger’s.

In Cassell’s work, she has shown that interactions with virtual humans can help people with autism better grasp the cues of emotion that can be so elusive to them under normal conditions. She has also created virtual peers for at-risk children in an educational environment. The virtual peer gets to know the child and develops a rapport, using what Cassell calls “social scaffolding” to improve learning. For example, if a child feels marginalized for speaking a dialect different from that of the teacher, the virtual peer will speak to the child in his or her dialect, but then model how to switch to standard English when interacting with the teacher. The child is taught to stay in touch with her home culture, but also learns how to succeed in the classroom.

Another notable comment by Cassell was that she never builds virtual humans that look too realistic. Her intent is not to fool someone into believing they are interacting with a real human. People need to be aware of the limits of the virtual human, while at the same time allowing the avatar to unconsciously evoke a human response and interaction.

The panel cited other examples from research that illustrate how effective virtual assistants can be in helping humans improve their social interactions. In the future, it may be possible for our intelligent assistants to give us tips on how to interact more effectively with those around us. For example, a smart assistant might buzz us if it senses we’re being too dominant or angry. The technology isn’t quite there yet, but it could be headed in that direction.

Overall the panelists were optimistic about the direction of artificial intelligence. They also expressed optimism in our ability to ensure our future virtual and robotic companions understand us and work with us effectively. It’s not about making artificial intelligence experience human emotion, they emphasized, but about building machines that understand us better.

Why a Knowledge Graph May Power the Next Generation of Siri-Like Assistants

What’s the biggest complaint against Siri and other virtual personal assistants (VPAs)? The complaint I see the most is that Siri doesn’t always give you the answer, but instead displays links to a bunch of web pages and makes you do the work. Try asking Siri right now “Who built the Eiffel Tower?” Siri will display a Wikipedia blurb and map and say “Ok, here’s what I found.” It’s up to you to read through the Wikipedia text (which seems painfully small on my iPhone 6 Plus, but I’m old) and find out that Gustave Eiffel was the designer and engineer.

Now try typing the same question into Google. At the top of the screen, you’ll see the names and photos of Gustave Eiffel and Stephen Sauvestre. Not only did Google answer the question directly, it actually told me something I didn’t know, which is that Eiffel wasn’t the only architect who designed the famous tower.

What technology underlies Google’s ability to answer my question directly? Anyone who follows the world of SEO knows the answer: the Google Knowledge Graph. The Knowledge Graph is built on mountains of information about people, things, and their interrelationships, housed today in Wikidata (and formerly in Freebase, which came to Google with its July 2010 acquisition of Metaweb).

Google’s Knowledge Graph has evolved into the Knowledge Vault and Jaron Collis does a great job at explaining some of the technology that powers it in this Quora response. Google leverages complex data mining and extraction algorithms to continually glean information from the web, disambiguate it, and load it into a structured graph where meaning and relationships are clearly defined and easy to query.
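To get a feel for how a structured graph answers a question directly, here’s a minimal sketch against Wikidata’s public SPARQL endpoint, which backs much of this kind of data. Q243 is Wikidata’s identifier for the Eiffel Tower and P84 is its “architect” property; this is a simplified stand-in for whatever Google runs internally, not Google’s own API.

```python
# Minimal sketch: ask a knowledge graph "Who built the Eiffel Tower?"
# Uses Wikidata's public SPARQL endpoint. Q243 = Eiffel Tower,
# P84 = architect. Requires the `requests` package.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?architectLabel WHERE {
  wd:Q243 wdt:P84 ?architect .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": QUERY, "format": "json"},
)
for row in response.json()["results"]["bindings"]:
    print(row["architectLabel"]["value"])  # e.g. Stephen Sauvestre
```

Notice that the answer comes back as structured entities you can print directly, not a list of documents to read, which is exactly the difference between Google’s direct answer and Siri’s pile of links.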

In my recent post on Opus Research called “The Knowledge Graph and Its Importance for Intelligent Assistance,” I look at why this technology is so important for the coming age of VPAs and enterprise intelligent assistants. If you’re a developer in the field of Big Data or Machine Learning, you may very well be building the infrastructure that powers the truly smart digital assistants of the future. Those would be the ones that can answer just about any question without making you read a web page.

What the Age of Chat Means for Intelligent Assistance

There’s no disputing the fact that messaging platforms, specifically WeChat and Line, have become the most used interfaces on mobile devices in Asia. Because of the traction these platforms have gained, companies are building increasing levels of functionality on top of these services. The hottest trend is the addition of bots that users can message back and forth with as though they were human friends.

Text-based bots are emerging to perform all kinds of services, from hailing rideshare cars to ordering gifts, to figuring out the best deal on complex travel plans. There’s no doubt that these bots represent a new form of what we’ve been calling intelligent assistance.
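Under the hood, most of these bots share the same webhook pattern: the messaging platform POSTs each incoming message to the bot’s endpoint, and the bot replies through the platform’s send API. Here’s a minimal, hypothetical sketch of that loop using Flask; the payload shape and routing logic are illustrative, since every platform defines its own.

```python
# Hypothetical sketch of a messaging-bot webhook. The platform POSTs each
# incoming message here; payload shapes vary by platform, so this uses a
# simplified {"text": ...} body for illustration. Requires Flask.
from flask import Flask, request

app = Flask(__name__)

def handle_message(text):
    """Toy intent routing; a real bot would call an NLU service here."""
    lowered = text.lower()
    if "ride" in lowered:
        return "Sure - where would you like to be picked up?"
    if "gift" in lowered:
        return "Happy to help. What's the occasion and your budget?"
    return "Sorry, I didn't catch that. Can you rephrase?"

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json()
    reply = handle_message(event.get("text", ""))
    # In production, the bot would POST `reply` back to the user via the
    # platform's send-message API rather than just returning it.
    return {"reply": reply}

if __name__ == "__main__":
    app.run(port=5000)
```

The intelligence lives entirely behind that one endpoint, which is why brands can swap in richer natural language understanding over time without changing how the bot plugs into the messaging platform.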

Mobile users in North America and Europe aren’t using the messaging interface to the same extent as their counterparts in Asia. But if the trend continues, platforms such as Facebook Messenger and perhaps an upcoming Google competitor could become as dominant here as WeChat and Line are in China and Japan.

Many US-based brands are already rushing to get ready for the shift from apps to messaging platforms. What does this mean for intelligent assistants and technologies that companies have already invested in? For more depth on this topic, check out my latest post on Opus Research called “Why Text-Based Commerce is the Future of Intelligent Assistance.”