Intelligent Assistant Awards 2015 – Time to Apply!

Opus Research recently announced that their Intelligent Assistants Conference 2015 will be held at the W Hotel New York from October 13-14, 2015. In conjunction with last year’s conference, Opus held an awards contest for the top customer-facing intelligent assistants. The 2014 award winners were Domino’s Dom, Hyatt Hotel’s virtual agent reservation system, and the U.S. Army’s Sgt. Star.

This year Opus will hold the 2nd annual Intelligent Assistant Awards (IAA). The contest is open to all operational intelligent assistants that function primarily in a self-service role. That means the focus is on customer-facing technologies that help users carry out activities such as making purchases, completing financial transactions, and answering support questions.

In assessing the entrants, the judging team will evaluate characteristics including the quality of the user interface, the overall quality of the assistance, consistency across media channels, accuracy, and personality.

Last year there was a strong list of entrants. This year Opus expects even more participation in the contest. If your organization uses an intelligent assistant to provide self-service to your customers, I highly encourage you to enter the contest. Applying for the award isn’t too time-consuming and you’ll earn two free passes to the Intelligent Assistants Conference 2015 just for entering. Once you’re at the conference, you’ll have the opportunity to share your knowledge and learn from other innovators in the growing field of self-service and intelligent assistance.

Be sure to check out the details on the Intelligent Assistant Awards page. If you need any help applying, reach out to the Opus Research team and I’m sure they’ll be more than happy to assist.

Crystal – Intelligent Assistant for Better Emails

In my last post I alluded to the promise of intelligent assistants in the enterprise. I was envisioning a world in which natural language understanding, combined with algorithms and other technologies, gives office workers the perfect personal assistant. Our assistants will be ever-present, offering data, reminders, insights, and coaching tips to ramp up our effectiveness and propel our careers to the next level.

Just this week I stumbled upon an application that might be added to the future enterprise assistant’s repertoire. While listening to a podcast from Note to Self, I heard about an application called Crystal. Crystal is designed to help you write more effective emails by coaching you to tailor your email message for your intended recipient.

How does Crystal work? Based on the Note to Self broadcast, Crystal tries to build a profile of the recipient by searching for samples of their writing in their LinkedIn profile, tweets, and other publicly available social media content. Crystal then applies an algorithm to this content to determine the intended recipient’s most likely personality style, based on the DISC personality model.

Using the DISC profile, the Crystal algorithm derives a plausible model of the recipient’s preferred communication style. It’s then able to coach you in drafting an effective email message. According to the Crystal website, the coaching solution is meant to help you communicate with empathy.

The demo on the Crystal website shows suggestions for emailing with Mark Cuban, who apparently prefers short and direct communications. As you write the email, Crystal actually recommends rephrasings that will help you get your point across more effectively to your audience. When emailing Cuban, Crystal suggests replacing longer sentences such as “I am afraid I won’t be able to make it this Friday,” with a shorter, more to the point sentence along the lines of “I won’t be able to make it this Friday.”

If you’re inclined to hem and haw about a possible new date for the meeting, Crystal coaches you to suggest a specific date for Mark to accept or decline, such as “Can we reschedule for next Tuesday?” By the time you’re finished writing your message, you feel confident that you have a good shot at connecting with Mark on his level and in the way he prefers.
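Crystal hasn’t published how its algorithm works, so here’s only a toy sketch of the coaching step: it assumes the recipient’s DISC style is already known and applies a couple of hand-written rewrite rules for the direct “D” profile. The rule patterns and function names are all made up for illustration.

```python
# Illustrative only: a toy "email coach" that suggests rewrites based on a
# recipient's (already known) DISC style. Crystal's real model is not public.
import re

# Hand-written rules per DISC style; a real system would learn these.
REWRITE_RULES = {
    "D": [  # Dominant: prefers short, direct phrasing
        (re.compile(r"I am afraid I won't be able to", re.I), "I won't be able to"),
        (re.compile(r"I was wondering if we could", re.I), "Can we"),
    ],
    "S": [  # Steady: prefers a warmer opening
        (re.compile(r"^Hi,", re.I), "Hi, hope you're doing well,"),
    ],
}

def coach(draft, disc_style):
    """Return a list of suggested rewrites of the draft for this DISC style."""
    suggestions = []
    for pattern, replacement in REWRITE_RULES.get(disc_style, []):
        if pattern.search(draft):
            suggestions.append(pattern.sub(replacement, draft))
    return suggestions

draft = "I am afraid I won't be able to make it this Friday."
for suggestion in coach(draft, "D"):
    print("Suggestion:", suggestion)
# Suggestion: I won't be able to make it this Friday.
```

The real product presumably infers the style from the recipient’s public writing and learns its phrasing suggestions, rather than relying on a handful of hard-coded rules like these.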

I’ve had training on DISC personality styles, including how best to communicate with people depending on their particular personality profile. I’ve found it almost impossible to keep all this in mind when the time comes to actually talk or email people. My guess is that I slip back automatically into the communication style that I prefer.

If Crystal can truly help us improve the effectiveness of our communications, it’ll be a great addition to the enterprise productivity toolset of our future personal intelligent “enterprise” assistant. The Note to Self podcast brought up the topic of privacy concerns, as well as the worry that leveraging personality insights to tailor communications could lead to manipulation. These are valid concerns that need to be addressed. But Crystal provides a glimpse into the powerful assistance that intelligent software could soon provide those of us forced to navigate the slippery slopes within the enterprise.

Intelligent Personal Assistants – Destined for the Enterprise?

We’ve grown accustomed to the many ways intelligent personal assistants help us in our day-to-day lives. Many of us rely heavily on voice-driven commands for initiating basic web searches, making calls, dictating texts, setting reminders and timers, playing music from our favorite artists, and more. We’ve also come to expect helpful cards giving us information we need about traffic, airline flights, hotel confirmation numbers, and package tracking, to name but a few.

What happens when we start relying on personal assistants to help us perform better at our jobs? Today I published a guest post on the Opus Research website called “How Google, Apple, and Microsoft Are Building Intelligent Assistants for the Enterprise.”

Do enterprise “personal” assistants have the ability to give employees who use them a leg up over those who don’t? Could the right intelligent assistant play the role of a wingman that propels your career to the next level?

Join the conversation on this and other related topics in the “Intelligent Assistants Developers and Implementers” group on LinkedIn.

SoundHound Now Provides Intelligent Voice-Driven App Solutions

SoundHound, the company behind the music recognition app of the same name, recently unveiled enhanced features under the name Hound that bring it squarely into the intelligent personal assistant camp.

A TechCrunch article from last week quotes founder and CEO Keyvan Mohajer as saying the company was always working towards a much larger vision than just providing a music recognition app. The Hound engine performs speech recognition and natural language processing simultaneously, in real time, rather than as separate sequential tasks. This advance enables Hound to return query responses very rapidly.

Based on the demo, the Hound engine can also process complex queries. An example is: “show me pet friendly hotels in Chicago under $300 a night with 3 or more stars excluding bed and breakfasts.” That’s pretty impressive.
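SoundHound hasn’t said what Hound’s output looks like internally, but the natural target for a query like that is a structured set of filters a hotel-search backend can execute. The sketch below hand-rolls that idea; the parsing rules and field names are my own invention, not Hound’s.

```python
# Illustrative only: turning the demo query into structured filters.
# Hound's actual grammar and output format are not public.
import re

def parse_hotel_query(query):
    """Extract a few constraint types from a natural-language hotel search."""
    q = query.lower()
    constraints = {
        "city": None,
        "max_price": None,
        "min_stars": None,
        "pet_friendly": "pet friendly" in q or "pet-friendly" in q,
        "exclude": [],
    }
    if m := re.search(r"\bin ([a-z ]+?) under", q):
        constraints["city"] = m.group(1).strip().title()
    if m := re.search(r"under \$?(\d+)", q):
        constraints["max_price"] = int(m.group(1))
    if m := re.search(r"(\d+) or more stars", q):
        constraints["min_stars"] = int(m.group(1))
    if "excluding bed and breakfasts" in q:
        constraints["exclude"].append("bed_and_breakfast")
    return constraints

print(parse_hotel_query(
    "show me pet friendly hotels in Chicago under $300 a night "
    "with 3 or more stars excluding bed and breakfasts"
))
```

The hard part, of course, is doing this robustly for arbitrary phrasings, and doing it in real time alongside speech recognition, which is exactly what SoundHound is claiming.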

The Hound assistant is currently available as an invitation-only Android beta, and an iOS version is in the works.

Perhaps even more interesting is that SoundHound is making its technology available to developers and app owners who want to “houndify” their apps. Just within the last few weeks I’ve written about MindMeld from Expect Labs and IBM’s Bluemix, both of which offer platforms and tools for voice-enabling apps. It seems there’s a real trend afoot.

The Houndify solution advertises itself as a provider of the full spectrum of services for creating voice-driven apps, including the same fast speech recognition and natural language processing engine that supports the Hound assistant. The Houndify website indicates that all operating platforms are supported: iOS, Android, Windows, Unix, Raspberry Pi, and others. You need an invitation code to create a Houndify developer account and the website doesn’t currently list any pricing information.

Will SoundHound’s Hound succeed as a voice-driven intelligent personal assistant? Will Houndify thrive as a platform for voice-driven apps? Both markets are certainly filled with opportunity. Now it looks like there’s yet another dog in the hunt. (Sorry. Couldn’t resist.)

Building a Chatbot Robot From LEGO Bricks

Have you ever pondered a better way to get people to make donations to a good cause? How about constructing a talking, dancing robot out of LEGO bricks that engages people in conversation and asks for monetary contributions? I bet you didn’t think of that!

Well, some folks in New Zealand did, as reported in a story by Radio New Zealand News. Shogo Nishiguchi, a master’s student from Osaka University, worked with New Zealand researchers from the University of Canterbury to build just such a LEGO robot as part of an Imagination Station project. The LEGO fireman moves, dances, and does its best to hold a conversation. It also uses light-hearted humor to ask for donations to keep the Imagination Station center running.

The article includes a link to a brief but informative video that provides details about the actual construction of the talking fireman LEGO-bot. The team used the Unity Game Engine for programming the robot’s animation. The fireman includes several actuators, a camera, and a speaker so that it can talk to visitors. Arduino is used to control the robot’s movements.
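The video doesn’t explain how the software side talks to the Arduino, but a common pattern in builds like this is to send simple commands over the USB serial connection. The sketch below assumes a made-up one-letter command protocol and a guessed port name; it’s illustrative only, not the team’s actual code.

```python
# Illustrative only: sending movement commands from a host program to an
# Arduino over USB serial, using a made-up one-letter protocol.
# Requires the pyserial package: pip install pyserial
import time

import serial

COMMANDS = {
    "wave": b"W\n",   # hypothetical: wave an arm servo
    "dance": b"D\n",  # hypothetical: run the dance routine
    "stop": b"S\n",   # hypothetical: stop all actuators
}

def send_command(port, name):
    """Open the serial port and send a single command to the robot."""
    with serial.Serial(port, baudrate=9600, timeout=1) as conn:
        time.sleep(2)  # give the Arduino time to reset after the port opens
        conn.write(COMMANDS[name])

if __name__ == "__main__":
    send_command("/dev/ttyACM0", "dance")  # the port name is an assumption
```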

The team also makes use of a chatbot conversational database that it refers to simply as “Chatbot.” I’m not sure what the exact source of this database is, but it appears to be similar to (albeit more limited than) the A.L.I.C.E. conversational database constructed by Dr. Richard Wallace of Pandorabots. The chatbot database used for the fireman is from the perspective of an extraterrestrial, so it doesn’t work all that well for a firefighter. But it’s probably better than having a robot that can’t engage in even a simple level of chit chat.

The team also used a dialog scripting engine to create the custom dialog for talking about Imagination Station and asking for donations. The humor comes in when the firefighter robot kids potential donors that he only accepts large denomination bills, as in:

‘Oh, I’m sorry – I only accept $50 or $100 notes. No, just kidding, you can put in whatever you want.’
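I don’t know how the team actually wired the custom donation dialog to the generic chatbot database, but a common approach is to check the scripted rules first and fall back to general chit chat only when nothing matches. Here’s a rough sketch of that layering, with made-up keywords and responses:

```python
# Illustrative only: layering a scripted donation dialog over a generic
# chatbot fallback. The matching rules and responses are made up.
SCRIPTED_DIALOG = [
    (("donate", "donation", "money"),
     "Oh, I'm sorry - I only accept $50 or $100 notes. "
     "No, just kidding, you can put in whatever you want."),
    (("imagination station", "what is this place"),
     "Imagination Station is a hands-on learning centre - donations keep it running!"),
]

def generic_chatbot(utterance):
    """Stand-in for the general-purpose chatbot conversational database."""
    return "Interesting! Tell me more."

def respond(utterance):
    text = utterance.lower()
    # Check the custom, scripted dialog first...
    for keywords, reply in SCRIPTED_DIALOG:
        if any(keyword in text for keyword in keywords):
            return reply
    # ...then fall back to the generic chatbot database.
    return generic_chatbot(utterance)

print(respond("Can I make a donation?"))  # scripted, joking response
print(respond("Do you like music?"))      # generic fallback
```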

The question on everyone’s mind now is: Will we be seeing talking Santa robots ringing the bell by the donation bucket in front of our favorite retailers this coming Christmas? Maybe the technology isn’t quite ready for primetime yet, but I have the feeling it won’t be long.

Virtual Health Assistants for Improved Patient Outcomes

Thomas Morrow, chief medical officer for Next IT, published an article in the American Journal of Managed Care (AJMC) entitled “Automated Intelligent Engagement Using a Virtual Health Assistant.” Morrow points out that human-to-human engagement is critical to improving patient outcomes, but this level of engagement is costly and hard to achieve in practice.

Virtual Health Assistants (VHAs) can fill the gap by standing in as proxies for health care providers and by giving patients answers to questions, reminders, and motivational support. Morrow suggests that VHAs might even be better suited than human practitioners when it comes to following up on sensitive personal topics.

For example, he points out that a patient might be less inhibited about discussing a prescribed drug’s impact on sexual performance with a VHA than with another human. In fact, I’ve written before about research supporting the view that people are generally more open with virtual humans and avatars than they are with other people (and this certainly holds true when the other person is an authority figure such as a physician).

Morrow sees huge promise for VHAs. He describes the technology as being in its teenage years. VHAs will be able to automate activities that currently cut into the quality time doctors have with their patients, such as record keeping in electronic health systems.

Morrow also envisions VHAs of the future as effective health coaches. The ideal VHA will get to know the patient and their lifestyle and then influence positive choices related to diet, exercise, and overall wellness. With the right motivational VHA as coach, and incentives and rewards, individuals might even avoid conditions that lead to disease and dependency on prescription medicines. Many of the activity trackers available in today’s wearables already allude to this capability.

In the article, Morrow lists a number of companies that offer virtual assistant technologies specializing in health care.

I’ve written about Codebaby and Geppetto recently. Based on the AJMC article, Morrow plans to take a closer look at some of the companies in the list. I’ll be sure to keep an eye out for his observations.


Conversational Computing and IBM’s Bluemix

O’Reilly hosted the Fluent conference back in April, targeting web developers interested in hearing from experts about all the latest development tools and trends. Stew Nickolas from IBM offered an interesting keynote on Conversational Computing and you can watch this video of his full presentation.

As with all great keynotes, Nickolas didn’t just talk. He gave an impressive demo of a voice-driven “robot” that he’d programmed using IBM’s Bluemix development platform. Nickolas could control the little ball-shaped robot’s colors and behaviors by talking in natural language to a command center in the cloud.

The purpose of the demo was to whet the appetite of web developers for the possibilities of the Internet of Things (IoT), as well as to alert them to the full buffet of ready-built APIs available in Bluemix.

One of the APIs that Nickolas leverages in his keynote demo is the Bluemix Speech to Text service that uses IBM Watson technology. While this service by itself doesn’t include natural language processing capabilities, it uses various machine intelligence techniques to generate a more accurate transcription. You can give the service a test run in the live demo.
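If you’d rather skip the browser demo, the service can also be called over REST. The endpoint, auth scheme, and response shape in the sketch below reflect the Bluemix documentation as I understand it today and may well change, so treat them as assumptions and check your own service credentials.

```python
# A minimal sketch of calling the Bluemix Speech to Text service over REST.
# The URL, Basic-auth credentials, and response shape are assumptions based on
# the current docs; substitute the values from your own Bluemix service.
import requests  # pip install requests

URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
USERNAME = "your-service-username"   # from your Bluemix service credentials
PASSWORD = "your-service-password"

def transcribe(wav_path):
    """Send a WAV file to the service and return the combined transcript."""
    with open(wav_path, "rb") as audio:
        response = requests.post(
            URL,
            data=audio,
            auth=(USERNAME, PASSWORD),
            headers={"Content-Type": "audio/wav"},
        )
    response.raise_for_status()
    results = response.json().get("results", [])
    return " ".join(r["alternatives"][0]["transcript"] for r in results)

print(transcribe("hello.wav"))
```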

You can pair the Speech to Text module with something called the AlchemyAPI to build smart apps that can interpret natural language and also images. With those powers, your app could understand conversations, documents, and photos. The AlchemyLanguage product offers a wide range of capabilities, including keyword extraction, entity extraction, sentiment analysis, concept tagging, text extraction, and more.
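To give a sense of how little code is involved, here’s a minimal keyword-extraction call against AlchemyAPI’s REST endpoint. The URL and parameter names come from AlchemyAPI’s public docs as I read them today and could change under IBM, so treat them as assumptions.

```python
# A minimal sketch of AlchemyAPI keyword extraction over REST. The endpoint
# and parameter names are assumptions based on AlchemyAPI's public docs.
import requests  # pip install requests

API_KEY = "your-alchemyapi-key"  # assumption: a key registered at alchemyapi.com
ENDPOINT = "http://access.alchemyapi.com/calls/text/TextGetRankedKeywords"

def extract_keywords(text):
    """Return the keywords AlchemyAPI finds in a block of text."""
    response = requests.post(ENDPOINT, data={
        "apikey": API_KEY,
        "text": text,
        "outputMode": "json",
    })
    response.raise_for_status()
    return [keyword["text"] for keyword in response.json().get("keywords", [])]

print(extract_keywords(
    "IBM Bluemix bundles Watson services like speech to text and "
    "natural language classification into a cloud developer platform."
))
```

The other capabilities (sentiment, entities, concepts, and so on) follow the same request pattern against different endpoints.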

The Bluemix services catalog includes a natural language classifier, machine translation, question-and-answer services, a cognitive graph, and many more useful APIs. Most of the services appear to still be in beta and are currently offered for free. The AlchemyLanguage API is free for up to 1K transactions per day and $250/month for up to 90K transactions a month.

Conversational Computing doesn’t appear to be a widely used term yet, but the concept is central to intelligent assistants. The IBM Bluemix set of services is worth a look if you’re thinking about building the next great mobile personal assistant or other specialized smart app.