Jacada Granted Patent on Visual IVR Technology

Jacada announced today that they have received US Patent No. 8,995,646 B2, which covers some of the technology underlying the company’s Visual IVR (Interactive Voice Response) system. I wrote about Jacada’s Visual IVR technology last summer after seeing a demo at SpeechTEK 2014. As I mentioned in that post, Visual IVR systems are certainly different from intelligent assistant applications.

Visual IVR offers the user an interface that extends the capabilities of their self-service session. For example, if the customer wants details on a shipment or a product, a live agent can send them an SMS message that lets them access a self-service interface from their mobile device.

With or without the support of the human agent, the user can enter account or product details and retrieve the information they’re seeking. Visual IVR transforms the user’s mobile device into their own personalized support application.

I’m not an expert on patents, but I reviewed the patent description to see if I could understand the basics of what is claimed. It looks to me like the patent covers the technology required to determine that the user needs to extend his/her self-service session to a human agent in order to get assistance. This technology includes a process and means for connecting the human agent with the user’s device and for transferring data about the user’s interactions into the self-service session.

The patent also seems to include a means for a real person to interact with the user and help them navigate their way through the visual interface that’s pushed to them.
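Just to make that idea concrete, here’s a minimal sketch of what such escalation logic might look like. This is purely illustrative Python with hypothetical names (SelfServiceSession, needs_agent, escalate_to_agent); it is not drawn from Jacada’s implementation or the patent’s claim language.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch only -- not Jacada's implementation or the patent's claims.

@dataclass
class SelfServiceSession:
    """Tracks what the user has done so far in the visual self-service flow."""
    user_id: str
    events: List[str] = field(default_factory=list)
    failed_lookups: int = 0

    def record(self, event: str, failed: bool = False) -> None:
        self.events.append(event)
        if failed:
            self.failed_lookups += 1

def needs_agent(session: SelfServiceSession) -> bool:
    """Crude escalation rule: repeated failures or an explicit request for help."""
    asked_for_help = bool(session.events) and "help" in session.events[-1].lower()
    return session.failed_lookups >= 2 or asked_for_help

def escalate_to_agent(session: SelfServiceSession) -> Dict:
    """Package the interaction history so the human agent sees the full context."""
    return {
        "user_id": session.user_id,
        "history": list(session.events),
        "reason": "repeated failures" if session.failed_lookups >= 2 else "help requested",
    }

if __name__ == "__main__":
    s = SelfServiceSession(user_id="cust-042")
    s.record("entered order number 12345", failed=True)
    s.record("entered order number 12354", failed=True)
    if needs_agent(s):
        print(escalate_to_agent(s))  # hand this payload to the agent's console
```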

As I mentioned in my earlier post, there are most likely use cases that lend themselves better to Visual IVR than to support via a dialog-based virtual agent / intelligent assistant.  As Visual IVR systems become more sophisticated, it may turn out that these handy self-service options are preferred over conversational agents. I also wouldn’t be surprised if dialog-based virtual agent systems begin to work in tandem with Visual IVR technologies. 

Jacada is certainly a major player in the Visual IVR arena and their new patent provides them with substantial intellectual property to continue to build out their offerings and customer base.

EmoSPARK AI Cube Launches Quietly in the UK

Remember EmoSPARK? The artificially and emotionally intelligent cube burst onto the scene last February with a successful Indiegogo campaign, raising enough cash through pre-orders to push forward with development. With very little fanfare, EmoSPARK’s PR firm announced that it would be launching the product today at a low-key press event in the UK. By the looks of a few photos on Twitter, the event was small.

The company is positioning EmoSPARK as “the first AI console to detect and respond to human emotions.” You can pair the EmoSPARK cube with multiple other devices, including a smartphone app, a specially designed EmoSPARK camera, and a TV. The cube also connects to your home WiFi.

This video demos the process of pairing EmoSPARK with the smartphone app. In an update posted on Indiegogo last month, the EmoSPARK team explained the personalization features built into the product. During the initial setup, or “bonding sequence,” the AI console asks the user to share personal details, which are stored in the Cube’s “memory cortex.” The AI promises not to look at this information without first getting the user’s permission, not to share the data without permission, and to always have the user’s best interests in mind.

Based on an earlier brief demo of the EmoSPARK AI functions, it strikes me that the device has very similar capabilities to Amazon’s Echo. The user can ask it to retrieve recipes, play music by a specific artist, and answer questions. In the case of the EmoSPARK, though, the pairing to the TV allows the AI to display videos or other images to accompany its responses.

The goals of the EmoSPARK are certainly more ambitious than those of the Amazon Echo. With its camera and AI software, it is designed to recognize faces and facial expressions and intuit emotion. In this respect, the EmoSPARK’s intended features more closely mimic those of Jibo, the social robot that’s still under development.

How will the EmoSPARK fare upon its launch? It will be interesting to hear feedback from the early adopters. My first impression of the device and its broader ecosystem, based solely on limited videos, is that it has tons of potential. On the downside, it looks somewhat weird: the large eyeball with its continuously dilating pupil is somewhat unnerving (and reminiscent of HAL 9000), though I suppose you could get used to it. The device may also be a bit more loquacious than desired, but that’s the sort of thing that can easily be refined over time.

The EmoSPARK may come across as somewhat clumsy for now, but remember how unimpressive the video game Pong was when it came out (for those born early enough to remember back that far). Look at video games now. We’ve come a long way! These are baby steps. But EmoSPARK and all the other recently released voice-enabled intelligent assistant devices prove that the future of personal AI is just around the corner.

Mobile Voice Conference 2015 Wrap Up

This year’s Mobile Voice Conference brought together over a hundred industry experts in the fields of speech recognition, natural language processing, cognitive computing, and voice-enabled mobile app development. As you might imagine, the atmosphere was lively and the general mood upbeat. How could it not be? Speech technologies are in the limelight these days and there’s really no more exciting place to be.

The two days were chock-full of interesting presentations. Here are just a few highlights from my notes.

Tim Tuttle of ExpectLabs demonstrated their powerful MindMeld voice-driven search technology. He stated that in three years, most computers won’t have keyboards. We’ll be using natural language as the interface of choice. The MindMeld technology is available to companies to create their own voice-driven content discovery.

Lisa Falkson of CloudCar showed technologies that allow us to use voice commands in our cars to facilitate navigation, communication, and entertainment. They’re working together with ExpectLabs to enable advanced voice-driven search from right within your vehicle.

Christina Apatow of Speaktoit’s api.ai demonstrated how the backend system that powers the highly-rated Speaktoit Assistant is now available to developers. She showed the graphical development platform that allows you to easily build your own voice-enabled applications and run them using the Speaktoit API.

Sara Basson from the IBM Watson team talked about how the cognitive computing technology is being used to enhance learning. She gave some insights into the Watson Cognitive Tutor that engages students to improve their learning experience.

Peter Cahill of Voysis delved into the challenges of making text-to-speech sound more human, especially when reading a story. He gave a glimpse of some exciting new technology that his company is pioneering that may facilitate breakthroughs in this area.

There was an excellent session on virtual assistants in the health care space. Jen Snell from NextIT showed some of their early technology in this area. She cited a study suggesting that many patients prefer interacting with a virtual assistant when being discharged from a hospital after a procedure, and that readmissions and complications can be avoided when the patient has been instructed by a virtual assistant.

Laura Kusumoto from Kaiser Permanente talked about some of their emerging technology that includes virtual health assistants and health coaches.

Jonathan Dreyer from Nuance demonstrated their Florence virtual health care assistant. Florence is still an early technology, but Nuance has been working closely with doctors to refine it, and Dreyer even demoed a voice-enabled wearable virtual assistant for physicians.

There were sessions on the evolution of personal assistants and other sessions focusing on intelligent assistants in the enterprise. I gave a short talk on a framework that can be used for evaluating enterprise virtual assistants. The framework might be useful if you’re a company in the market for customer-facing virtual agent technologies, but you’re not quite sure how to determine your needs.

There were far more sessions than I’ve listed here, and of course lots of great discussions with industry experts. If you missed the event, or if you were there and would like a copy of the presentations, stay tuned to the Mobile Voice Conference website. Rumor has it that they may be posting the presentations soon.

Contextual Computing and Intelligent Assistants

How many times has this happened to you? You’re sitting in an important meeting with your boss or a client, when all of a sudden your smartphone rings. One moment everything’s quiet and serious, the next moment some embarrassing ringtone is blaring out from your pocket or purse.

Cellular News posted an article this week on the topic of contextual computing, or contextual sensing in smartphones, to be more precise. It turns out that this technology might rescue you from these horrifying moments when you forget to silence your phone. So what is contextual computing and how can it help?

Based on the Cellular News article, contextual computing is a device’s ability to interpret data gathered from a sensor hub. Intel created a sensor hub in 2012, and since then data-gathering sensor hubs have been incorporated into a variety of chips. This technology could soon lead to personal intelligent assistants that pick up on sensory data, such as ambient sounds and location information, to determine that you’re in a meeting. When a call or text comes in, the assistant will know that it should alert you via a vibration instead of by blaring your annoying ringtone.

This is just a mundane example of how intelligent assistants can leverage contextual computing to serve us better. The article points out that Intel has created a contextual sensing software development kit (SDK). Developers can use the SDK to write programs that pull data from one or more sensor hub chips, helping to paint a full picture of where the user is and what they’re doing. These programs can also incorporate learning algorithms so that assistants can learn to predict behaviors.

If we’re driving a certain route and listening to a specific radio station, for example, the assistant can learn that we’re on our way to work and it could remind us of an early meeting. Under other circumstances, it could extrapolate that we’re out for a run and it could play our favorite exercise music and send calls straight to voicemail. The possibilities are limitless.
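As a rough illustration, here’s a toy Python sketch of what context inference from a sensor snapshot could look like. The names, rules, and thresholds are all hypothetical; they stand in for the learned models and the actual SDK calls a real assistant would use.

```python
from dataclasses import dataclass

# Hypothetical sketch only -- not the actual contextual sensing SDK API.

@dataclass
class SensorSnapshot:
    speed_kmh: float         # from GPS / accelerometer fusion
    ambient_noise_db: float  # from the microphone
    on_known_commute: bool   # location matches a learned commute route
    audio_source: str        # e.g. "radio", "none"

def infer_context(s: SensorSnapshot) -> str:
    """Toy rules standing in for the learned models a real assistant would use."""
    if s.on_known_commute and s.speed_kmh > 30 and s.audio_source == "radio":
        return "commuting"
    if 6 < s.speed_kmh < 16 and not s.on_known_commute:
        return "running"
    if s.speed_kmh < 2 and s.ambient_noise_db < 45:
        return "in a meeting"
    return "unknown"

def choose_action(context: str) -> str:
    actions = {
        "commuting": "remind the user about the 9:00 meeting",
        "running": "queue the exercise playlist and send calls to voicemail",
        "in a meeting": "switch incoming alerts to vibrate",
    }
    return actions.get(context, "do nothing")

if __name__ == "__main__":
    snapshot = SensorSnapshot(speed_kmh=55, ambient_noise_db=70,
                              on_known_commute=True, audio_source="radio")
    ctx = infer_context(snapshot)
    print(ctx, "->", choose_action(ctx))
```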

As always, the question about privacy can’t be ignored. Users need the choice to opt in or out of data gathering and the ability to turn off collection during specific times. But the benefits of an intelligent personal assistant that understands what we’re doing are tremendously compelling.

Software Advice Publishes Customer Service Software Report

Software Advice, an online consultancy for customer relationship management software, has published a report looking back over the past decade and a half at trends influencing Customer Relationship Management (CRM) software. The report focuses specifically on customer service tools and the small and midsize business market. The study included 30 vendors that offered help desk, customer service, and/or service desk software to SMBs between 2000 and 2015.

The report has some interesting key takeaways. It turns out that the number of CRM software vendors offering self-service options and tools has grown dramatically over the past decade. Back in 2005, only 25% of CRM vendors had support for self-service channels. As of 2015, 86% of CRM vendors offered self-service support tools.

In a report that Software Advice released previously, they drilled into the most popular self-service channels. That earlier report showed that, of the six leading self-service channels, FAQs ranked at the top and virtual agents came in last. However, Craig Borowski, a market researcher for Software Advice, sent me commentary on the newly published report in which he noted that virtual agents are likely to play an increasingly important role in the future.

Two other key takeaways from the report show that technologies and customer trends are likely paving the way to the growth of virtual agents that Borowski expects. These takeaways point to the central role played by both social media and mobile devices in servicing customers.

Half of CRM vendors now offer specialized tools to enable SMBs to manage their social media presence and the customer support scenarios that are initiated there. An example of such tools is an integration capability that enables a help desk agent to create a support ticket directly from a customer’s tweet.
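Here’s a minimal sketch of what a tweet-to-ticket integration of that sort might look like. The data shapes and the ticket_from_tweet helper are hypothetical; real vendor APIs for this kind of integration vary widely.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch only -- real help desk / Twitter integrations vary widely.

@dataclass
class Tweet:
    handle: str
    text: str
    tweet_id: str

@dataclass
class SupportTicket:
    customer: str
    channel: str
    subject: str
    body: str
    created_at: str
    source_ref: str

def ticket_from_tweet(tweet: Tweet) -> SupportTicket:
    """Turn a customer's tweet into a help desk ticket, keeping a link to the source."""
    return SupportTicket(
        customer=tweet.handle,
        channel="twitter",
        subject=tweet.text[:60],  # first 60 characters as the subject line
        body=tweet.text,
        created_at=datetime.now(timezone.utc).isoformat(),
        source_ref=f"https://twitter.com/{tweet.handle}/status/{tweet.tweet_id}",
    )

if __name__ == "__main__":
    t = Tweet(handle="jane_doe",
              text="My order arrived damaged. Can someone help?",
              tweet_id="123456789")
    print(ticket_from_tweet(t))
```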

Customers’ growing reliance on mobile devices is also confirmed in the report. In 2005, just half of all help desk vendors offered mobile capabilities. In 2015, 100% of the software vendors included in the study supported mobile channels. Examples of mobile CRM support include the ability for customers to access live chat from their smartphones and specialized customer support mobile apps.

Why do these trends bode well for virtual agents? Virtual agents can be integrated with speech technologies to provide customers with a simple, intuitive way to ask questions. Speech-enabled virtual agents lend themselves very well to mobile devices. Machine learning algorithms that power intelligent virtual agents enable them to spot trends and topics on social media far more quickly and comprehensively than humans can.

The growing popularity of customer self-service, social media, and mobile is laying the groundwork for significant growth in virtual agents. The Software Advice report on developments in customer service software offers valuable insights into self-service trends over the past decade and a half and exposes data that points to the trends of the future.

 

Intelligent Assistants as Intervention Against Depression

Less than two weeks ago the co-pilot of the Germanwings Airbus intentionally steered the commercial airliner into a mountain, killing all on board. The plane’s automated flight control systems operated flawlessly. A mentally unstable human being, however, was able to easily override these systems and make the plane do something it was never intended to do.

In the future, will aeronautical autopilot systems be intelligent enough to know when not to give control to a human pilot? Our technology isn’t at a point where it can protect us from every bad or purposely evil human decision. But what if intelligent assistants could at least help people suffering from depression to get through their day more easily? This question is apparently being explored by researchers at Northwestern University.

A clinical trial began last month to test the efficacy of smartphone apps in helping people with depression and anxiety. The program is referred to as IntelliCare and leverages smartphones to deliver personalized treatment material to participants. The participants receive messaging and also provide input back into the apps, which help patients by teaching them skills for managing their mood. According to the documentation describing the clinical trial, the IntelliCare system also applies machine learning techniques to develop algorithms that better tailor the apps to the individual participant, based on collected data.

Principal Investigator David C. Mohr of the Feinberg School of Medicine at Northwestern has done previous studies on the effects of specially designed mobile apps for those suffering from psychological disorders. One such study sought to measure the results of mobile app intervention on patients with schizophrenia.

The study found that 90% of patients rated the app as beneficial and that after just one month of using the app, significant reductions in psychotic symptoms were noted.

Can tools such as these be integrated into healthcare intelligent assistants of the future? Will we be able to rely on our smartphones, or other connected devices, to help us tame our inner demons and learn techniques to manage our moods? The technology definitely seems worth exploring.

 

Siri Co-Founder Dag Kittlaus Talks About Viv Technology

Dag Kittlaus, CEO and co-founder of Viv Labs, gave an interview recently in which he shared thoughts on the future of intelligent personal assistants. He also talked about the next generation intelligent assistant currently under development at Viv Labs. You can read my guest blog post on Opus Research to get a summary of the interview.

I found Kittlaus’s reference to his company’s patent on “A Cognitive Architecture and Marketplace for Dynamically Evolving Systems” particularly interesting. The gist of the patented idea seems to be that an intelligent assistant must have the ability to satisfy needs that it can’t predict in advance. In order to do this, the underlying technology has to be able to generate programs on the fly.

These programs would coordinate the execution of tasks to retrieve information, perform a transaction, or do whatever the user has asked the intelligent assistant to do. In many cases, fulfilling the user’s intent will entail finding and executing services provided by third-parties. But the assistant will have the ability to do this in a seamless way.
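One way to picture “generating programs on the fly” is as automatic composition of third-party services: given a goal and a registry of capabilities, the assistant chains services together until the user’s intent can be satisfied. The Python sketch below is my own simplified illustration of that idea, not Viv’s actual architecture.

```python
# Hypothetical sketch of on-the-fly service composition -- an illustration of the
# idea, not Viv's architecture. Each service maps a set of inputs to one output.

SERVICES = {
    "geocode":      ({"address"}, "coordinates"),
    "find_florist": ({"coordinates"}, "florist"),
    "place_order":  ({"florist", "bouquet"}, "order_confirmation"),
}

def plan(goal: str, available: set) -> list:
    """Greedily chain services until the goal output can be produced (or we stall)."""
    steps, have = [], set(available)
    progress = True
    while goal not in have and progress:
        progress = False
        for name, (needs, produces) in SERVICES.items():
            if produces not in have and needs <= have:
                steps.append(name)
                have.add(produces)
                progress = True
    return steps if goal in have else []

if __name__ == "__main__":
    # "Send flowers to my mother's house": the assistant knows the address and the
    # bouquet the user picked, and composes third-party services to finish the job.
    print(plan("order_confirmation", {"address", "bouquet"}))
    # -> ['geocode', 'find_florist', 'place_order']
```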

The patent documentation contains many more interesting details and is worth a look for those interested in intelligent personal assistant technologies.