I first wrote about Expect Labs over a year and a half ago, when their MindMeld speech recognition and natural language processing technology drove an innovative social listening app tied into Facebook. Since that time, Expect Labs has pivoted their offering into a voice-focused Software as a Service.
MindMeld now provides app owners and developers with what Expect Labs calls an intelligent voice interface. MindMeld uses semantic mapping technology to create a knowledge graph of the application’s content. It then uses the graph to improve the accuracy of its NLP engine in understanding precisely what users are asking when they talk to a voice-enabled interface.
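To make the idea concrete, here's a minimal, purely illustrative sketch of how an app-specific knowledge graph can help resolve a noisy voice query to the app's own content. This is not MindMeld's actual API or algorithm (which isn't public in this form); the entity names and the fuzzy-matching approach are my own assumptions for illustration.

```python
# Illustrative sketch only -- NOT the MindMeld API. The idea: an app-specific
# knowledge graph supplies a vocabulary of entities, and matching a spoken
# query against that vocabulary helps the NLP engine resolve ambiguous or
# error-laden speech-to-text output to the app's own content.

from difflib import SequenceMatcher

# Hypothetical knowledge graph for a concierge app like Fetch:
# entities plus a few related terms (names invented for this example).
knowledge_graph = {
    "red roses": {"type": "product", "related": ["bouquet", "flowers"]},
    "rebook flight": {"type": "action", "related": ["travel", "ticket"]},
    "dinner reservation": {"type": "action", "related": ["restaurant", "table"]},
}

def resolve(query: str) -> str:
    """Map a (possibly garbled) spoken query to the closest graph entity."""
    def score(entity: str) -> float:
        # Simple string similarity stands in for real semantic matching.
        return SequenceMatcher(None, query.lower(), entity).ratio()
    return max(knowledge_graph, key=score)

print(resolve("send some red rozes"))
```

Even with a speech-recognition error ("rozes"), the query resolves to the "red roses" entity, because the app's own vocabulary constrains the search space; a general-purpose recognizer with no domain knowledge has no such anchor.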
A recent article in Macworld reported on Fetch's use of the MindMeld service to power their mobile concierge app. Fetch makes it easier to buy the things you need by connecting you to specialists and personal shoppers. Users can tell Fetch what they need, from a plane ticket to an order of flowers and chocolate for a significant other, and the app will set the wheels in motion to have a specialist fill the request quickly and painlessly.
Now that Fetch has partnered with MindMeld, they’ve been able to create a voice-enabled app that’s optimized for the Apple Watch. Fetch users can use voice commands for on-demand concierge services right from the Watch.
The Macworld article cites Expect Labs data showing that people spend 60% of their online time on mobile devices. In contrast, only 10% of purchases are made from mobile devices. In the article, Tim Tuttle, CEO and founder of Expect Labs, suggests that this discrepancy can be attributed to the current complexity of carrying out purchases on the mobile form factor.
Intelligent voice-enabled interfaces, like those made possible by MindMeld, aim to simplify our interactions with mobile devices. If Fetch's MindMeld-powered Apple Watch app is any indication, voice interfaces will transform our wearables and smartphones into the personal assistants we've always dreamed of. The age of voice is here, and Expect Labs is well-positioned to fuel the transition to intelligent natural language interfaces. To see more examples of MindMeld's technology in action, check out the Expect Labs demo page.
A few weeks ago, Darrell Etherington published a piece on TechCrunch about his experiences using the Apple Watch at TechCrunch Disrupt NY 2015. I haven't placed my order for the Apple Watch yet, but when I talk about my intent to do so, most people ask why. There are still a lot of questions out there about what the Watch can do.
Etherington gives a strong plug for the Apple Watch as an assistant that helps you stay focused on demanding work tasks, while alerting you to incoming work requests that can't be missed. At the same time, it helps you stay connected to loved ones, even in the midst of a hectic schedule.
Etherington likes the Watch's notifications, because they break through all the noise to get his attention. He can respond to notifications quickly, such as by initiating a "Like" for a comment on Convo, an enterprise social networking app used by his co-workers. The like lets the sender know that he's seen the request and acknowledges it.
Even in the midst of the Disrupt event chaos, Apple Watch helps Etherington stay in touch with loved ones by allowing him to quickly send sketches, taps and heartbeats. These communication forms are more personalized and even quicker than stopping to exchange text messages.
The Apple Watch seems to be the perfect platform for those micro-moments that Jeffrey Hammond of Forrester Research writes about. Micro-moments are unprompted alerts or nudges from mobile apps that provide useful information and that are quick for the recipient to interact with or dismiss.
Micro-moments, made up of helpful notifications that assist us through the challenges of our busy lives, are likely to become valuable features of our wearables. As products like the Apple Watch become increasingly adept at keeping us on track, the distinction between typical wearable apps and what we think of today as personal intelligent assistants may start to blur.
Dan Miller of Opus Research wrote a post earlier this month about how the Apple Watch is a perfect match for Siri. Miller points out that Siri has some apparent limitations on the Apple Watch, in that she doesn't speak back to the wearer, but just listens and carries out spoken commands.
Yet Miller views the Apple Watch as the perfect extension for speech-enabled use cases that the typical iPhone user is already accustomed to and comfortable with. iPhone users have come to depend on Siri’s reliability for controlling the clock, setting alarms, leaving reminders, or making calendar entries. Miller thinks these types of commands are closely integrated with watch functionality and that buyers of the Apple Watch will naturally use Siri to perform these operations.
Miller also points out that Siri's range of capabilities is extensible. Siri stands ready to launch and operate a large number of apps, and the inventory of apps with Siri integration will steadily increase.
In a post I wrote last September, I noted that the form factor of the Apple Watch might actually lead to a renewed interest in using Siri, even among the large number of iPhone users who rarely interact with Apple's digital personal assistant (now that the novelty has long since worn off). When I saw the demo of the Apple Watch following the #SpringForward reveal, my thoughts about the form factor resurfaced.
There are so many tiny app icons loaded onto the watch face that it's difficult to tap on the app you really want to open. Wouldn't it be a ton easier to just say "Hey Siri, open Uber" or "Hey Siri, open Facebook"? The constraints of the wearables form factor may provide a renewed raison d'être for voice interfaces in general, and intelligent assistants in particular.
I agree with Dan Miller that Apple Watch and Siri are a natural pair. It’ll be interesting to observe how this next generation of wearables impacts the intelligent personal assistant market and whether wearables force voice interfaces to the forefront.
When Apple finally unveiled the long anticipated Apple Watch this week, Siri came along for the ride. With the general disillusionment over Apple's intelligent assistant, it wasn't a foregone conclusion that the company would include Siri in their first flagship wearable product. But Siri is baked into the watch, and it appears "she'll" be able to do pretty much what she's been able to do on the iPhone since her debut with the iPhone 4S.
Could the Apple Watch give new life to Siri? I’m thinking it might just do that. We’re all so adept now at typing on our smartphone keyboards and browsing, reading, and generally interacting with our touch screens that, truth be told, we rarely need to use our voices to ask Siri (or Google Now, or Cortana) for help.
But what will happen when we’re wearing a watch? What if we’re so comfortable with the watch that we leave our phone in our purse or in our briefcase, and what if we want to send a text to a friend? If Siri’s reliable enough to listen to us dictate our message and shoot it off to our friend, are we going to bypass that option to stop, dig out our phone, and type out a message? Maybe. Old habits die hard. But if Siri can do it for us, I’m thinking we might rely on her more and more.
Once we start relying on Siri to write and send our messages from our watch, we may begin to ask her to update our calendar, or reserve a dinner table, or buy flowers for a friend, or recommend a good movie, or tell us how to rebook a flight if the one we’re waiting on just got canceled (once she’s able to do such things). Heck, she’s right there and it’s a lot easier than fooling with our phone.
There used to be lots of talk about killer apps. Are wearables the killer app that intelligent assistants have been waiting on? Wait; that analogy doesn’t really make sense. What I mean to say is: Are wearables the killer platform that will bring intelligent assistants into the mainstream? I guess we won’t find out until sometime in 2015.