GetAbby Uses Human Avatar for Increased Customer Engagement

How important is it for a customer-facing intelligent virtual assistant to look like a real person? There has been an abundance of academic research on the use of embodiment in virtual agents. Back in 2013, I wrote about one such study by Ann L. Baylor from Florida State University. Baylor’s experiment indicated that human-like talking avatars made it easier for young people to relate to the avatar, especially when the avatar’s age, gender, and culture resembled the interlocutor’s.

GetAbby is a provider of intelligent virtual assistant technologies built around conversational relationships and human identity. What sets GetAbby apart from other intelligent assistant vendors is that, while its solution uses speech recognition and natural language processing like the others, its avatar is an actual human, recorded and encoded with the client’s message in mind.

The technology driving the GetAbby human avatar is called True Image, and it enables clients to use any person of their choosing as the avatar. True Image transforms video recordings of that person into an avatar with real human characteristics and expressivity. GetAbby’s avatar can understand, educate, coach, and remind users.

Why does GetAbby use human avatars? GetAbby focuses on what they term the psychology of engagement. In their view, to be truly engaging, a virtual agent must win the trust of the human with whom it’s interacting.

Wayne Scholar, CTO at GetAbby, gave a presentation at the Mobile Voice Conference 2015 in which he listed three key characteristics a virtual agent needs in order to build trust: Ability, Benevolence, and Integrity. GetAbby has found that human avatars have an advantage over any other type of animated character in exhibiting these fundamental human characteristics and building trust, because we are inherently drawn to communicate with other humans.

As Scholar noted in his presentation, human avatars also skirt the danger of the uncanny valley. Animated characters or cartoon figures that closely resemble humans, but that aren’t quite human, can repel people and destroy any sense of engagement.

GetAbby’s analytics confirm that customers engage strongly with the human avatar: 70% follow the avatar through to an online purchase, and over 85% say they find the avatar trustworthy and would interact with it again. The solution also provides a broad range of data analytics, including usage trends, unanswered question logs, and information on context patterns.

The GetAbby human avatar technology seems to be well-suited for a variety of use cases. GetAbby’s solutions platform focuses on call center, candidate selection & assessment, financial, government, healthcare, and retail industries. You can visit the GetAbby website to see a demo of the human avatar technology.

Expect Labs’ MindMeld Powers Voice-Enabled Apple Watch App

I first wrote about Expect Labs over a year and a half ago, when their MindMeld speech recognition and natural language processing technology drove an innovative social listening app tied into Facebook. Since that time, Expect Labs has pivoted their offering into a voice-focused Software as a Service.

MindMeld now provides app owners and developers with what Expect Labs calls an intelligent voice interface. MindMeld uses semantic mapping technology to create a knowledge graph of the application’s content. It then uses that graph to improve the accuracy of its NLP engine in understanding precisely what users are asking when they speak to a voice-enabled interface.
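
Expect Labs hasn’t published MindMeld’s internals, but as a rough sketch of the general idea, here’s a toy example (with a made-up catalog and an invented scoring scheme) of how a knowledge graph built from an app’s own content could help rank interpretations of a transcribed voice query:

```python
# Hypothetical sketch only: the catalog, entities, and scoring below are
# illustrative and are not MindMeld's actual data model or algorithm.

# A toy "knowledge graph": entities drawn from an app's content, each with
# a type and a few related terms.
KNOWLEDGE_GRAPH = {
    "red roses":      {"type": "flowers",   "related": ["bouquet", "valentine", "anniversary"]},
    "dark chocolate": {"type": "chocolate", "related": ["gift box", "truffles", "anniversary"]},
    "plane ticket":   {"type": "travel",    "related": ["flight", "airfare", "round trip"]},
}

def interpret(transcript: str) -> list[tuple[str, float]]:
    """Score each known entity against a speech transcript.

    An entity scores higher when its name or related terms appear in the
    transcript, so the app favors interpretations grounded in its own
    content instead of arbitrary open-domain guesses.
    """
    words = set(transcript.lower().split())
    scored = []
    for entity, node in KNOWLEDGE_GRAPH.items():
        name_hits = sum(w in words for w in entity.split())
        related_hits = sum(any(w in words for w in term.split()) for term in node["related"])
        score = name_hits + 0.5 * related_hits
        if score > 0:
            scored.append((entity, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # "Order roses and truffles for our anniversary" resolves to catalog items.
    print(interpret("order roses and truffles for our anniversary"))
```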

A recent article in Macworld reported on Fetch’s use of the MindMeld service to power their mobile concierge app. Fetch makes it easier to buy the things you need by connecting you to specialists and personal shoppers. Users can tell Fetch what they need, from a plane ticket to an order of flowers and chocolate for a significant other, and the app sets the wheels in motion to have a specialist fill the request quickly and painlessly.

Now that Fetch has partnered with MindMeld, they’ve been able to create a voice-enabled app that’s optimized for the Apple Watch. Fetch users can use voice commands for on-demand concierge services right from the Watch.

The Macworld article cites Expect Labs data showing that people spend 60% of their online time on mobile devices, yet only 10% of purchases are made from them. In the article, Tim Tuttle, CEO and founder of Expect Labs, suggests that the gap comes down to how cumbersome it currently is to complete a purchase on a mobile device.

Intelligent voice-enabled interfaces, like those made possible by MindMeld, aim to simplify our interactions with mobile devices. If Fetch’s MindMeld-powered Apple Watch app is any indication, voice interfaces will transform our wearables and smartphones into the personal assistants we’ve always dreamed of. The age of voice is here, and Expect Labs is well-positioned to fuel the transition to intelligent natural language interfaces. To see more examples of MindMeld’s technology in action, check out the Expect Labs demo page.

Apple Watch as a Capable Event Assistant

A few weeks ago, Darrell Etherington published a piece on TechCrunch about his experiences using the Apple Watch at TechCrunch Disrupt NY 2015. I haven’t placed my order for the Apple Watch yet, but when I talk about my intent to do so, most people ask why. There are still a lot of questions out there about what the Watch can do.

Etherington gives a strong plug for the Apple Watch as an assistant that helps you stay focused on demanding work tasks while alerting you to incoming work requests that can’t be missed. At the same time, it helps you stay connected to loved ones, even in the midst of a hectic schedule.

Etherington likes the Watch’s notifications because they break through all the noise to get his attention. He can respond to them quickly, such as by sending a “Like” for a comment on Convo, an enterprise social networking app used by his co-workers. The like lets the sender know that he has seen the request and acknowledges it.

Even in the midst of the Disrupt event chaos, the Apple Watch helps Etherington stay in touch with loved ones by letting him quickly send sketches, taps, and heartbeats. These forms of communication are more personal and even quicker than stopping to exchange text messages.

The Apple Watch seems to be the perfect platform for those micro-moments that Jeffrey Hammond of Forrester Research writes about. Micro-moments are unprompted alerts or nudges from mobile apps that provide useful information and that are quick for the recipient to interact with or dismiss.

Micro-moments, made up of helpful notifications that assist us through the challenges of our busy lives, are likely to become valuable features of our wearables. As products like the Apple Watch become increasingly adept at keeping us on track, the distinction between typical wearable apps and what we think of today as personal intelligent assistants may start to blur.

Tables as Intelligent Assistants of the Future?

IKEA has released a website showcasing a vision of the kitchen of 2025. IKEA collaborated with IDEO and design students from Swedish universities to come up with the imaginative concepts. Central to their ultra-sleek and eco-conscious culinary environment of the future is a smart table. Though the table doesn’t talk in the video, it uses language to communicate with the family it serves.

The smart table can sense the objects on it and communicate by projecting words onto its surface. For example, if the family chef sets down two different foods, the table acts as an intelligent assistant by recommending flavor combinations or recipes. It can even tailor its suggestions based on other ingredients it knows are available in the household.

The chef can also place ingredients on the table’s smart cutting board to receive visual advice cues on how to prepare them. Not sure how to slice a mango? No problem. Just put the mango on the cutting board and the intelligent table will project graphics and instructions that step you through the process.

The table will even guide you through the preparation of an entire meal from start to finish, using a combination of projected graphics and helpful textual hints.
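
IKEA hasn’t said how the concept table would actually decide what to suggest, but as a thought experiment, here’s a minimal sketch (with an invented pantry and recipe list) of the kind of matching logic the recipe-suggestion behavior implies:

```python
# Thought-experiment sketch of the concept table's suggestion behavior.
# The ingredient sensing, pantry inventory, and recipes below are all
# made up for illustration; this is not IKEA's or IDEO's design.

RECIPES = {
    "tomato bruschetta": {"tomato", "basil", "bread", "olive oil"},
    "caprese salad":     {"tomato", "basil", "mozzarella"},
    "mango salsa":       {"mango", "tomato", "red onion", "lime"},
}

def suggest(on_table: set[str], in_pantry: set[str]) -> list[str]:
    """Suggest recipes that use everything placed on the table and whose
    remaining ingredients are already in the household pantry."""
    available = on_table | in_pantry
    return [
        name for name, ingredients in RECIPES.items()
        if on_table <= ingredients and ingredients <= available
    ]

if __name__ == "__main__":
    # Two foods set down on the table, plus what the household has on hand.
    print(suggest({"tomato", "basil"}, {"bread", "olive oil", "mozzarella"}))
    # -> ['tomato bruschetta', 'caprese salad']
```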

The smart table in the video isn’t speech-enabled. The video that explains the development process for the Kitchen 2025 concepts never mentions whether the choice to leave the table silent was deliberate.

It raises an interesting question: as intelligent objects evolve, will people be more drawn to chatty assistants or to ones that get their message across without speech? Only time will tell. And will we pay extra for a table that shows us how to slice a mango, or will we just pull up a YouTube video? It will probably take a combination of compelling features to draw us into the world of intelligently assistive objects. Once we’re there, it’ll be hard to go back.

Army Avatars Push the Envelope on Conversational Technologies

Defense Systems published an article last week about conversational avatars used by the U.S. Army. The avatars are the result of a partnership between the Army Research Laboratory (ARL) and the University of Southern California’s Institute for Creative Technologies (ICT). In the fall of 2013 I did an interview with Arno Hartholt of ICT and published the results in a post called Virtual Humans to the Rescue.

The Defense Systems article describes several ARL/ICT-built avatars and their specific use cases. Ellie is a virtual human that can pick up on facial and other cues to detect the emotional state of the person she’s speaking with. ICT calls Ellie’s mood-sensing technology SimSensei, and you can see a demo of it in this video. Ellie has been used effectively to converse with war veterans and detect markers that could signal post-traumatic stress disorder (PTSD) or depression.
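
To be clear, ICT hasn’t released SimSensei’s models, and the real system is far more sophisticated. Purely to illustrate the multimodal idea, here’s a hypothetical sketch that combines a few invented facial and vocal cues into a single screening signal; the features, weights, and threshold are all made up:

```python
# Illustrative sketch of combining nonverbal cues into one screening signal.
# The features, weights, and threshold are invented and bear no relation to
# ICT's actual models or to any clinical instrument.

from dataclasses import dataclass

@dataclass
class SessionCues:
    gaze_aversion: float   # fraction of the session spent looking away (0-1)
    smile_rate: float      # smiling frequency, normalized to 0-1
    speech_rate: float     # speaking rate, normalized to 0-1
    pause_ratio: float     # fraction of the session spent silent (0-1)

def distress_score(cues: SessionCues) -> float:
    """Weighted combination of cues; higher means more distress markers observed."""
    return (0.35 * cues.gaze_aversion
            + 0.25 * (1.0 - cues.smile_rate)
            + 0.15 * (1.0 - cues.speech_rate)
            + 0.25 * cues.pause_ratio)

def flag_for_follow_up(cues: SessionCues, threshold: float = 0.6) -> bool:
    """A screening aid only: flags a session for review by a clinician."""
    return distress_score(cues) >= threshold

if __name__ == "__main__":
    session = SessionCues(gaze_aversion=0.7, smile_rate=0.1, speech_rate=0.4, pause_ratio=0.5)
    print(round(distress_score(session), 2), flag_for_follow_up(session))
```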

The Army’s continued partnership with ICT points to a goal of expanding the use of immersive training. Conversational embodied virtual agents seem to be a significant part of that work. Two other use cases covered in the article are the Virtual Standard Patient and the Emergent Leader Immersive Training Environment. The virtual patient helps medical students hone their interviewing and diagnostic skills, while the Emergent Leader avatar works with junior leaders to improve their communication skills using role-playing exercises.

The Army has been employing and developing these avatars for several years now and continues to partner with ICT, so the technology must be having some positive results. The Defense Systems article notes that DARPA is also exploring ways to improve a computer’s ability to carry on human-like conversation. DARPA announced the Communicating with Computers project back in February of this year. As yet, I haven’t seen any news about the winners of DARPA’s competitive announcement.

Conversational technologies will undoubtedly be a key area of intelligent assistant development going forward, and the ARL’s partnership with ICT is helping to forge new ground in that arena.

Mobile Voice Conference 2015 Presentations Available

A few weeks ago, I wrote a brief synopsis of the AVIOS-sponsored Mobile Voice Conference 2015, which took place April 20-21 in San Jose, CA. There were many excellent sessions at the conference on all aspects of speech recognition, natural language processing, intelligent assistants, and technologies supporting the Internet of Things. If you missed the conference, or even if you were able to attend, you now have access to the presentations.

The AVIOS team has recently made the Mobile Voice Conference 2015 speaker presentations available online. Accessing the slides won’t give you the full experience of each session: many of the speakers used video or voice clips to augment the visual material, and the slides alone can’t convey all the nuanced information that the speakers shared with the onsite audience.

Even with these shortcomings in mind, the presentation materials provide a great resource. For those who weren’t able to attend the conference, hopefully the content will whet your appetite to attend the Mobile Voice Conference 2016.

A Look at Hexa Research’s Report on the Intelligent Virtual Assistant Market

Hexa Research, a market research and consulting firm, published an in-depth analysis of the intelligent virtual assistant market last year, and I was recently able to review an excerpted sample. In the report, Hexa Research forecast the global demand for intelligent virtual assistants to grow at a compound annual growth rate (CAGR) of over 30% from 2013 to 2020. That means that over the next five years, the report authors expect the market for virtual agent solutions to more than triple in size, reaching a value of around $3 billion.
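
As a quick sanity check on that “more than triple” claim: a 30% CAGR sustained for five years works out to a growth factor of roughly 3.7. The excerpt I reviewed doesn’t give a base-year market value, so only the multiplier is verified here:

```python
# Verify the growth multiplier implied by a 30% compound annual growth rate
# over five years. No dollar figures are derived, since the report excerpt
# doesn't state the base-year market size.

def cagr_growth_factor(rate: float, years: int) -> float:
    """Total growth factor implied by a compound annual growth rate."""
    return (1.0 + rate) ** years

if __name__ == "__main__":
    print(round(cagr_growth_factor(0.30, 5), 2))  # -> 3.71, i.e. more than triple
```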

The report cites several factors behind this expansive growth, including increased customer demand for online self-service, a trend toward self-reliance, and the increasing ability of intelligent assistant technologies to quickly provide customers with the answers they need.

Most of the data in the report is presented in terms of market segmentation and global regions. The two market segments used for the research are Large Enterprises and Small and Medium Enterprises (SMEs). The regions examined are North America, Europe, Asia Pacific, and RoW (Rest of the World, which for this report includes South America, the Middle East, and Africa).

North America is cited as the current leader in revenue from intelligent assistant sales. However, growth in emerging markets is expected to accelerate more quickly than in either North America or Europe over the next five years, due to a surge in mobile usage in those regions. Large enterprises far outpace SMEs in spending on intelligent assistants, accounting for over 80% of the market. SME spending is expected to pick up, but to remain small compared to the investments of large enterprises.

Hexa Research offers an interesting overview of the virtual agent technology ecosystem. Their overview distinguishes between four levels of capability, or maturity. What they call Level 1, for example, consists of virtual agents that engage the customer only through a text interface and offer responses in a text-based format. To achieve Level 2, an intelligent assistant needs to have at least a branded image and some other characteristics beyond those of Level 1. I found this maturity model interesting, as the general concept bears some resemblance to the capability model that I presented at the recent Mobile Voice Conference 2015. You’ll need access to the full report to see the other characteristics that distinguish the four intelligent assistant maturity levels Hexa Research has defined.

You’ll also need access to the full report to see the Porter’s Five Forces analysis, which examines the level of competition and the business strategies associated with the threat of new entrants, supplier power, industry rivalry, buyer power, and the threat of substitutes. The report also includes an analysis of 16 intelligent virtual assistant vendors and related technology providers. Since the report was published, IntelliResponse has been acquired by [24]7.

Based on the sample I reviewed, I can imagine that vendors in the industry or in a closely related industry would be interested in the information in the full report (assuming they haven’t already read it). The report could also provide insights to companies considering an investment in virtual agent technologies or to organizations interested in the market and where it might be headed over the next five years.