Macy’s On Call Offers Location-Based Intelligent Assistance

Macy’s is experimenting with a new form of location-aware intelligent assistance. Leveraging technology from Satisfi and IBM Watson, Macy’s is rolling out what it calls “Macy’s On Call” in a select number of stores. The service acts like an intelligent assistant that understands the customer’s current location without having to ask.

Customers shopping in a Macy’s store that supports the On Call service can enter questions in natural language in a mobile web user interface. For example, a customer searching for a new toaster oven might ask where the kitchen appliances are located, and the On Call automated assistant will direct the customer to the right spot.

The new service combines two technologies: Satisfi offers a platform that can ascertain a user’s location from their smartphone data and that can respond to the user’s natural language requests. IBM Watson provides a cognitive computing platform that understands natural language inquiries and searches through complex knowledge sources to find information with the highest probability of answering a specific question.
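
To make the architecture concrete, here’s a minimal sketch of how a location-aware Q&A flow like this might be wired together: resolve the shopper’s location to a store, then return the stored answer that best matches the question for that store. The store IDs, knowledge entries, and function names are my own illustrative assumptions, not Satisfi’s or Watson’s actual APIs.

```python
# Hypothetical sketch of a location-aware Q&A flow. The store IDs, data, and
# function names are illustrative; they are not Satisfi's or Watson's APIs.

KNOWLEDGE = {
    "macys_store_042": [
        {"q": "where are kitchen appliances",
         "a": "Kitchen appliances are on Level 3, near the east escalator."},
        {"q": "where are shoes",
         "a": "Shoes are on Level 2."},
    ],
}

def resolve_store(latitude, longitude):
    """Map smartphone coordinates to a store ID (stubbed for this sketch)."""
    return "macys_store_042"

def relevance(question, entry_question):
    """Crude relevance score: count of words shared by the two questions."""
    return len(set(question.lower().split()) & set(entry_question.lower().split()))

def answer(question, latitude, longitude):
    """Answer with the knowledge entry that best matches this store and question."""
    store_id = resolve_store(latitude, longitude)
    entries = KNOWLEDGE.get(store_id, [])
    if not entries:
        return "Sorry, I don't have information for this store yet."
    best = max(entries, key=lambda e: relevance(question, e["q"]))
    return best["a"]

print(answer("Where can I find kitchen appliances?", 40.7508, -73.9884))
```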

The combination of location-based awareness, natural language understanding, and the ability to find answers about products, product locations within specific stores, inventory, pricing, etc. enables Macy’s to offer an innovative and powerful new type of intelligent assistance to its shoppers.

During the recent MobileBeat 2016 event there was lots of discussion about engaging with customers while they’re inside the store. Nichele Lindstrom, director of digital with Whole Foods, noted in a presentation that over 50% of online recipe searches happen in the grocery store aisle. Whole Foods decided to launch a Facebook Messenger chatbot to help its shoppers with recipes and other questions.

Macy’s On Call is an example of another natural language-based self-service offering that helps customers when and where they need it most: onsite at a retail location in the direct path to purchase. Now that the technology supports this type of assistance, we’re likely to see more brands extend the reach of self-service to follow customers wherever they go.

This post was originally published at Opus Research.

SnapTravel Launches Emma, a Concierge Bot for Hotel Bookings

As Opus Research delves into the bifurcated domain of Intelligent Assistance, it has become increasingly evident that today’s bot developers and customer care professionals live in parallel universes.

In a post issued in 2011, when coining the term “Conversational Commerce”, Dan Miller, lead analyst and founder of Opus Research, anticipated “the advent of true self-service.” Noting the combination of smartphones, “The Cloud,” speech recognition with natural language understanding, and the recognition that our spoken words are assets, he saw “the foundation for smartphone-based services that are highly responsive to individual end-users.”

In other words, the drudgery associated with searching for products and services, selecting a preferred vendor, seeking help and advice and ultimately committing to make a transaction could be carried out under the individual’s control, through a conversational interface.

By contrast, in a recent Medium post, Alex Bunardzic, a long-time high-tech developer, made the somewhat counterintuitive statement that bots are “abolishing the self-service model.” To understand Bunardzic’s hypothesis, you first need to accept his view that self-service in the current world of apps and websites lays a lot of drudgery on the user.

Planning a vacation these days takes a lot of work. Sure, we can browse hotels from the comfort of our own sofa. We can spend hours reaping insights from the reviews of fellow travelers. We can compare prices for rooms across numerous websites and search for the best deal. But think of how much work all this is in comparison to the old days (for those of us senior enough to even remember them) when we visited our trusted travel agent once in her office and waited for her to call us back a week later with our entire trip planned out.

For Bunardzic, bots hold the promise of bringing us back to those good ol’ days when we could get expert, one-on-one advice from a trusted advisor. And now that trusted advisor lives in our pocket within the messaging app of our smartphone.

SnapTravel’s Emma: Aiming to be a Full-Service Concierge
Hussein Fazal, CEO of SnapTravel, believes in this model of bots as trusted personal advisors. The SnapTravel bot, launched today on Facebook Messenger, SMS, and Slack, tries to mimic the same level of service that travelers used to receive from human travel agents. SnapTravel calls its bot Emma the Concierge.

I tried out Emma on Facebook Messenger. As Fazal explained, the service is currently a hybrid of automated algorithms and human input. I told the bot the city and dates of my travel and also specified a nightly budget. The goal of SnapTravel is to find the traveler a hotel that represents the best value. Best value doesn’t necessarily mean the least expensive option. The bot also takes into account factors such as user ratings, number of stars, and the property’s location score.
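
For illustration, here’s a rough sketch of how a “best value” ranking might blend those signals. The weights and field names are assumptions on my part, not SnapTravel’s actual model; the point is simply that the top pick isn’t always the cheapest room.

```python
# A rough sketch of a "best value" ranking that blends price-versus-budget with
# user rating, star level, and location score. The weights and field names are
# assumptions, not SnapTravel's actual model.

def value_score(hotel, nightly_budget):
    """Higher is better: reward staying under budget, strong reviews, stars, and location."""
    price_fit = max(0.0, 1.0 - hotel["price"] / nightly_budget)  # 0 when at or over budget
    return (0.40 * price_fit
            + 0.30 * hotel["user_rating"] / 5.0
            + 0.15 * hotel["stars"] / 5.0
            + 0.15 * hotel["location_score"] / 10.0)

hotels = [
    {"name": "Harborview Inn", "price": 140, "user_rating": 4.6, "stars": 4, "location_score": 9.1},
    {"name": "Budget Stay",    "price": 95,  "user_rating": 3.8, "stars": 2, "location_score": 6.0},
]

best = max(hotels, key=lambda h: value_score(h, nightly_budget=180))
print(best["name"])  # the property a bot like Emma might surface as the best value
```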

It didn’t take long for Emma to get back to me with a recommendation for a remarkably reasonably priced hotel near my desired destination. Assuming that the bot really has checked for all the best prices, factored in user reviews, and taken location into account, it has probably saved me several hours’ worth of research. Now the trick is convincing me, the user, that the bot really is recommending the best value and that I can trust it.

For Fazal, establishing trust with the user is the key to success for a personal advisor bot such as SnapTravel. Presumably it will take time and at least a few uses to build this trust. But if I learn that I can rely on SnapTravel to ferret out the best value hotel for me no matter when or where I want to go, I’m likely to become a repeat user of the service.

In a quick comparison with the Expedia bot, SnapTravel’s Emma strives to be more of a full-service advisor. The Expedia bot offers a quicker way to get a short list of recommended properties for a city and check-in date. It’s not clear, though, that the Expedia bot is showing the hotels that represent the best value based on my criteria.

Emma not only takes your desired budget into consideration, but according to Fazal, the bot can learn about your preferences over time. It may learn, for example, that you prefer staying at 4-star hotels over saving a few bucks to stay at 3-star hotels. Based on this knowledge, it can tailor future recommendations to ensure that it presents you with 4-star options, even if they’re outside your target budget. The bot includes natural language processing that enables it to recognize not only city names, but terms such as “romantic” or “close to the beach.”
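
Here’s a toy sketch of those two behaviors: spotting soft preferences such as “romantic” or “close to the beach” in a request, and nudging a stored star-level preference toward what the user actually books. Everything here is hypothetical and not based on SnapTravel’s implementation.

```python
# Hypothetical sketch: simple keyword-based preference extraction plus a
# running update of a preferred star level. Not SnapTravel's actual code.

SOFT_PREFERENCES = {"romantic", "close to the beach", "family friendly", "quiet"}

def extract_preferences(message):
    """Return any known soft-preference phrases mentioned in the message."""
    text = message.lower()
    return [phrase for phrase in SOFT_PREFERENCES if phrase in text]

def update_star_preference(profile, booked_stars, learning_rate=0.3):
    """Shift the user's preferred star level toward the level they just booked."""
    current = profile.get("preferred_stars", float(booked_stars))
    profile["preferred_stars"] = (1 - learning_rate) * current + learning_rate * booked_stars
    return profile

profile = {"preferred_stars": 3.0}
print(extract_preferences("Something romantic and close to the beach in Lisbon"))
print(update_star_preference(profile, booked_stars=4))  # drifts toward 4-star stays
```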

Are bots the end of self-service? Personal advisor bots may be the end of time-consuming research into all the best options for travel, hotels, restaurants, and so on. If we still think of bots as facilitating self-service, then it seems that self-service may be about to get a lot easier.

This article was originally posted at Opus Research.

Chatbots as Entertainment

Seems everybody is talking about bots and how they can be effectively deployed to assist people in all sorts of tasks. Tomasz Tunguz of Redpoint recently posted on his blog about the four basic use cases for the current generation of chatbots. Tunguz sees these four use cases as:

  • Alerts
  • Search/Input
  • Support
  • Booking

A couple of months ago I tried out a completely different type of chatbot, one so “way out there” that it doesn’t fall into any of Tunguz’s four use case categories. To understand the purpose of Pullstring’s Humani Jessie chatbot, you’d have to create a fifth, hybrid category that could be called something like “entertainment marketing.”

As soon as you engage with Humani’s Jessie chatbot on Facebook Messenger, you’re plunged into a sleazy interactive romance novel. Jessie is a messed-up millennial in search of a job, new living accommodations, and her next hot date. She texts with you as if you’re her closest confidante and seeks your counsel on all sorts of crazy decisions.

When she’s on a date with a new guy, she checks in with you to give you the details and often asks for your input.

 


Jessie disappears and then reappears with a text when you’re least expecting it, just like a real, flaky friend might. Though Jessie and Humani aren’t marketing anything other than themselves at this point, the overall conversational experience is engaging and could become a platform for brand marketing.

Though the whole Jessie experience was definitely silly, I found chatting with the automated bot at least mildly engaging. I interacted with Jessie for a few minutes at a time over the course of a week and played the whole adventure out to its abrupt conclusion.

Entertainment bots are here, and they exist on most messaging platforms. Arterra is a “choose your own adventure” style sci-fi experience on Kik that takes you through an action-packed space thriller entirely within a conversational interface.

What’s the future for entertainment bots? Is there a way to monetize them? It may take some time to find out.  In a recent article, Scott Rosenberg investigated various use cases for bots and quoted the team from Kik as saying the present moment for bots is akin to the web’s “Netscape 1.0, blink-tag phase.” In other words, we’re really early on in the evolution of bots.

If user engagement is any measure of success, bots that offer truly entertaining conversational experiences may eventually have staying power. How they’ll be used and what they may evolve into remains to be seen.

 

Connected, Talking Toys: Striving to Exceed Expectations

Children have always imagined dolls and other toys that could talk to them like real people. The new generation of WiFi-enabled, web-connected toys has the technology to get close to realizing that dream. The implications for both entertainment and education are huge.

Connected Toys Have the Technology to Hear, Understand, and Speak

The most advanced connected toys can be equipped with automatic speech recognition (ASR), natural language understanding (NLU), dialogue management, and text-to-speech (TTS) abilities. These toys can hear and understand what a child says, determine an appropriate response, and speak using a synthesized voice.
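
As a rough illustration, that loop looks something like the sketch below, with each stage stubbed out. A real toy would call cloud ASR, NLU, and TTS services rather than these placeholder functions.

```python
# A minimal sketch of the listen / understand / decide / speak loop described
# above. Each stage is stubbed out for illustration.

def recognize_speech(audio_bytes):
    """ASR stub: turn captured audio into text."""
    return "can we play a game"

def understand(text):
    """NLU stub: map the child's words to a simple intent."""
    return "play_game" if "game" in text else "chitchat"

def choose_response(intent):
    """Dialogue-management stub: decide what the toy should say next."""
    responses = {
        "play_game": "Let's play I Spy! I spy something blue.",
        "chitchat": "Ooh, tell me more about that!",
    }
    return responses.get(intent, "Let's talk about something fun.")

def speak(text):
    """TTS stub: a real toy would synthesize and play audio here."""
    print(f"[toy says] {text}")

def handle_turn(audio_bytes):
    """One full conversational turn: hear, understand, respond, speak."""
    speak(choose_response(understand(recognize_speech(audio_bytes))))

handle_turn(b"raw-microphone-audio")
```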

Although Amazon’s Echo product isn’t in the toy category, the Echo and its built-in voice assistant Alexa are getting many parents and their children comfortable talking to an interactive smart device. Amazon now even offers the technology powering the Echo, the Alexa Voice Service (AVS), to third parties to embed into their own devices.

We’ve come a long way since the days of pull-string toys and Furby. But in spite of all the technology, there are still challenges involved in realizing every child’s dream and meeting the expectations of parents. In this post I’ll address two of these challenges and potential solutions.

Challenge 1: Mimicking Real-Life Conversations

First and foremost is the challenge of making connected toys fun and engaging for children. To be engaging, the toy needs to mimic real-life conversation. Unfortunately, creating a toy that can carry on completely lifelike conversations, including turn taking, remembering context, and inventing responses on the fly, is not currently possible.

Toymakers can’t anticipate what a child might want to talk about, so toys can’t be pre-programmed to respond to every imaginative remark or suggestion. To get around this constraint, toys such as Mattel’s Hello Barbie take the lead in the conversation. Hello Barbie suggests topics to discuss and games to play.

Potential Solution

Even with the doll setting the direction of the conversation, a child can still be encouraged to think and imagine. The toy can suggest topics and imaginary adventures and prompt the child for input. If developed in a skillful way, these toy-directed dialogues can be very engaging for the child, especially if the toy can understand and respond to the child’s input.
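
A minimal sketch of what such a toy-led dialogue might look like under the hood appears below. The script, branching keywords, and data structure are illustrative only, not Pullstring’s or Hello Barbie’s actual dialogue format.

```python
# A small sketch of a toy-led dialogue: the toy proposes a topic, prompts the
# child, and branches on a few expected answers. Entirely illustrative.

DIALOGUE = {
    "start": {
        "prompt": "Let's pretend we're explorers! Should we explore a jungle or the ocean?",
        "branches": {"jungle": "jungle", "ocean": "ocean"},
        "fallback": "Ooh, tough choice! Let's start with the jungle.",
    },
    "jungle": {
        "prompt": "I hear a parrot up in the trees! What color do you think it is?",
        "branches": {},
        "fallback": "What a beautiful bird!",
    },
    "ocean": {
        "prompt": "I see a dolphin! Should we follow it?",
        "branches": {},
        "fallback": "Splash! Here we go!",
    },
}

def toy_turn(node_id, child_reply):
    """Return the next node and what the toy says, given the child's reply."""
    node = DIALOGUE[node_id]
    for keyword, next_id in node["branches"].items():
        if keyword in child_reply.lower():
            return next_id, DIALOGUE[next_id]["prompt"]
    return node_id, node["fallback"]  # keep the child engaged even on unexpected input

print(DIALOGUE["start"]["prompt"])
print(toy_turn("start", "The jungle!")[1])
```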

Parents will see that their child is being encouraged to imagine and even think about new things. Children will feel empowered and will enjoy getting positive reinforcement from their conversational partner.

Pullstring (formerly ToyTalk), the company behind Hello Barbie’s dialogue, offers a platform to support any company in building conversational dialogue. There are other existing services that toymakers can leverage to build dialogue if they don’t want to create and maintain their own proprietary platform.

Challenge 2: Keeping the Content Pipeline Filled

Creating hours’ or even days’ worth of original, engaging content is time-consuming, to say the least. Everything a talking toy says, all of its dialogue, every story it tells, every game it directs the child to play, must be scripted. Content is what makes a talking toy interesting and valuable. A toy that only has a few things to say will quickly be cast aside.

Since today’s talking toys are connected to the cloud, they can access databases full of content. There’s still a question of where the content will come from and whether or not parents will feel comfortable trusting it.

Potential Solution

As more talking toys enter the market, we can assume that more people will be motivated to create content for these devices. At Hutch.ai we create interactive stories and other engaging content designed for conversational devices, including talking toys. Our goal is to widen the funnel and collect, curate, and distribute content from many diverse content contributors.

We’re just at the start of the journey to create engaging talking toys that meet all the expectations of both parents and children. But the journey will no doubt be an interesting and rewarding one.

How Do Talking Toys Measure Up So Far?

I recently gave a presentation on talking toys at SpeechTek 2016, a technology conference. I brought a Hello Barbie with me and, after connecting it to a personal MiFi device, gave a demonstration. Following the demo, several audience members said they were pleasantly surprised at the positive conversational experience a child could have with the talking doll. In fact, one person admitted that prior to the demo, they had a negative impression of the product based solely on press coverage.

While talking toys certainly aren’t perfect, the best connected toys can already offer children a rich and engaging experience. As those of us in the industry continue to find solutions to the inherent challenges, web-connected toys will continue to improve and exceed expectations.

 

This article was originally posted at Hutch.ai.

Social Robots: New Publication on Medium

Last week I was at SpeechTek 2016 and I participated on a panel about the social impacts of conversing robots. It was a great experience and the panel started an interesting discussion that still continues.

I was inspired to start a publication on the topic of Social Robots on Medium.

Why Medium? Medium seems to be getting a lot of activity these days, and some of my favorite stories are posted there, like the articles published in Chatbots Magazine.

I’m very interested in what I’ll call the “chatbot movement” and I write a lot about conversational UIs and conversational commerce at Opus Research. But I’m even more interested in voice-interactive devices and the opportunities they present.

The first article I’ve posted in the Social Robots publication is called Shooting the Breeze About Social Robots.  I try to share some of the highlights from the SpeechTek keynote panel discussion, which included Peter Krogh of Jibo and Leor Grebler of UCIC.

Please have a look at the article and, if you feel so inclined, share it. I’d like to get others to contribute to the publication, so if you’re interested in the social robots topic, reach out to me via your favorite channel and let’s talk!


SpeechTek 2016 Offers Opportunities to Talk about Connected Toys and Conversational Robots

Next week at SpeechTek 2016 I’ll be joining others to discuss the exciting topic of connected, conversational devices.

I’m looking forward to participating on a keynote panel next Tuesday, May 24th on the topic of “Social Impact of Conversing Robots.” Peter Krogh of Jibo will moderate the panel and I’ll be joined by Leor Grebler of UCIC and Bruce Balentine of Enterprise Integration Group. Some of the areas we may cover include what people might want to talk to robots about, where the content told by robots will come from, how much robots will know about us and how that might drive the conversation, and what technological advances are needed to make robots better conversational partners.

On Wednesday, May 25th, I’ll be giving a presentation entitled “Talking Toys: Technology and Outlook.” There is a lot going on in this field right now. Many toy makers and startups are experimenting with connected devices, both to explore what these devices can offer consumers and to test out the market.

Internet-connected dolls that talk are still controversial. But there’s little doubt that conversational toys will be part of the future. There have been, and will continue to be, growing pains around security, privacy, and even conversational content for talking toys. We’re still in the early days of defining standards and understanding how regulations like COPPA factor into the development of safe and engaging connected devices designed for children. Even devices like the Amazon Echo, which aren’t considered “toys,” offer a glimpse into what types of entertainment and educational content are possible with voice interfaces.

I’ll explore both the challenges and opportunities of talking toys in my presentation and I’m hoping for a lively discussion. If time permits, we’ll have some connected devices available to demo. We can also demo the work we’ve been up to at Hutch.ai, where we’re building a marketplace of content designed for conversational toys and devices.

Update to Opus Research’s Intelligent Assistance Landscape

Last week the team at Opus Research published an update to the Intelligent Assistance Landscape. This update represents the first major revision since the landscape was first published in partnership with VentureBeat last fall.

This new version includes updates to the industry players that populate various categories across the landscape. Opus has also refined the categories themselves. If you haven’t seen the landscape or had a chance to delve into it, here’s a quick synopsis.

Intelligent Assistance Landscape

The top half of the diagram identifies core technologies that enable intelligent assistance. Opus distinguishes two main groups of enabling technologies.

Conversational technologies underpin the natural language exchange between humans and machines. Speech I/O services facilitate the understanding of spoken words and enable machines to talk. Text I/O services support natural language input and understanding via text. This category can also include dialog management services and chatbots. Avatars provide embodiment for intelligent agents, while emotion and sentiment analysis enable software to interpret and act upon knowledge of human emotions and context.

Intelligent Assistance technologies are the powerful core services that help machines understand meaning and intent and learn how to serve us better. These technologies include Speech Analytics, Natural Language Processing, Machine Learning, Semantic Search and Knowledge Management.

The bottom half of the Intelligent Assistance Landscape provides a taxonomy for the various types of smart assistants. While the terminology used for these services is fluid, Opus Research has put a stake in the ground by establishing specific criteria for each category.

Opus defines Mobile and Personal Assistants as smart agents that understand us and whose primary purpose is to help us control the smart objects around us. Assistants such as Siri and Google Now, for example, activate functions on our mobile phones, Amazon’s Alexa controls objects in our smart home, and assistants in cars control the features of our connected vehicle.

Personal Advisors focus on helping us manage complex tasks. These assistants tend to be more specialized, and they are generally product agnostic. For example, a specialized personal travel advisor can assist with planning and booking trips and suggest products and services from a wide array of providers.

Virtual Agents and Customer Assistants are customer-facing, self-service assistants. These assistants represent one company or brand. Their knowledge of the company’s products and services is typically fairly broad and they focus on providing information that customers ask most frequently.

Employee Assistants help people do their jobs within an enterprise. These assistants are generally integrated with the enterprise software applications that employees rely on most and they can also aggregate information to make it more readily available.

The domain of intelligent assistants is gaining increasing attention. The update to Opus Research’s Intelligent Assistance Landscape adds some insightful clarity around this complex topic.