Teaching Machines to Understand Us Better

Last week, on the Opus Research blog, I wrote about the importance of emotional intelligence in virtual assistants and robots. At the recent World Economic Forum in Davos, there was an issue briefing on infusing emotional intelligence into AI. It was a lively and interesting discussion, and you can watch a video of the half-hour panel. I’ll summarize my key takeaways.

The panel members were three prominent academics in the field of emotional intelligence in computer technology:

  • Justine Cassell, Associate Dean, Technology, Strategy and Impact, School of Computer Science, Carnegie Mellon University, USA
  • Vanessa Evers, Professor of Human Media Interaction, University of Twente, Netherlands
  • Maja Pantic, Professor of Affective and Behavioral Computing, Imperial College London, United Kingdom

Maja Pantic develops technology that enables machines to track areas of the human body that “broadcast” underlying emotions. The technology then seeks to interpret a person’s emotions and feelings from those signals.
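
To make this concrete, here’s a toy sketch of the general idea, not Pantic’s actual system: treat the landmark coordinates a facial tracker produces for each video frame as a feature vector, and map them to emotion labels with an off-the-shelf classifier. The landmark count, emotion labels, and data below are illustrative stand-ins.

```python
# A toy illustration, NOT Pantic's actual system: treat facial-landmark
# coordinates from a tracker as a feature vector and map them to emotion
# labels with a standard classifier. Synthetic data stands in for the
# tracked landmarks and their labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]
N_LANDMARKS = 68  # a common facial-landmark count; x and y per landmark

rng = np.random.default_rng(0)
X = rng.normal(size=(500, N_LANDMARKS * 2))  # stand-in landmark features
y = rng.integers(len(EMOTIONS), size=500)    # stand-in emotion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

frame_features = X_test[:1]                  # one "video frame"
print("Predicted emotion:", EMOTIONS[int(clf.predict(frame_features)[0])])
```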

Vanessa Evers has been working with Pantic on specific projects that apply a machine’s ability to understand emotion and even social context. Evers emphasizes how critical it is for machines to understand social situations in order to interact with human beings effectively.

One interesting project she cites involves an autonomous shuttle vehicle that picks up and delivers people to terminals at Schiphol Airport. The team is training the shuttle to recognize family units: it wouldn’t be effective if the shuttle made room for mom and dad and then raced off, leaving two screaming children behind. Evers also cites the example of the shuttle going around someone who is taking a photo instead of barging right in front of them. Awareness of social situations is critical if we’re to accept thinking machines into our lives.

Justine Cassell builds virtual humans, and her goal is to construct systems that evoke empathy in humans (not to build systems that demonstrate or feel empathy themselves). This is an interesting distinction. Empathy is what makes us human, Cassell notes, yet many people have a difficult time feeling empathy or interacting effectively with other people. This is especially true of individuals on the autism spectrum, including those with high-functioning forms such as Asperger’s.

In her work, Cassell has shown that interactions with virtual humans can help people with autism better grasp the cues of emotion that can be so elusive to them under normal conditions. She has also created virtual peers for at-risk children in educational settings. The virtual peer gets to know the child and develops a rapport, using what Cassell calls “social scaffolding” to improve learning. For example, if a child feels marginalized for speaking a dialect different from the teacher’s, the virtual peer will speak to the child in that dialect, but then model how to switch to standard English when interacting with the teacher. The child stays in touch with his or her home culture while also learning how to succeed in the classroom.

Another notable comment from Cassell was that she never builds virtual humans that look too realistic. Her intent is not to fool someone into believing they are interacting with a real human. People need to be aware of the limits of the virtual human, while the avatar is still allowed to evoke an unconscious human response and interaction.

The panel cited other examples from research that illustrate how effective virtual assistants can be in helping humans improve their social interactions. In the future, it may be possible for our intelligent assistants to give us tips on how to interact more effectively with those around us. For example, a smart assistant might buzz us if it senses we’re being too dominant or angry. The technology isn’t quite there yet, but it could be headed in that direction.

Overall the panelists were optimistic about the direction of artificial intelligence. They also expressed optimism in our ability to ensure our future virtual and robotic companions understand us and work with us effectively. It’s not about making artificial intelligence experience human emotion, they emphasized, but about building machines that understand us better.

Why a Knowledge Graph May Power the Next Generation of Siri-Like Assistants

What’s the biggest complaint against Siri and other virtual personal assistants (VPAs)? The one I see most often is that Siri doesn’t always give you the answer, but instead displays links to a bunch of web pages and makes you do the work. Try asking Siri right now, “Who built the Eiffel Tower?” Siri will display a Wikipedia blurb and a map and say, “Ok, here’s what I found.” It’s up to you to read through the Wikipedia text (which seems painfully small on my iPhone 6 Plus, but I’m old) and find out that Gustave Eiffel was the designer and engineer.

Now try typing the same question into Google. At the top of the screen, you’ll see the names and photos of Gustave Eiffel and Stephen Sauvestre. Not only did Google answer the question directly, it actually told me something I didn’t know: Eiffel wasn’t the only architect who designed the famous tower.

What technology underlies Google’s ability to answer my question directly? Anyone who follows the world of SEO knows the answer: the Google Knowledge Graph. The Knowledge Graph is built on mountains of information about people, things, and their interrelationships, housed in Wikidata (and formerly in Freebase, whose creator Metaweb was acquired by Google in July 2010).

Google’s Knowledge Graph has evolved into the Knowledge Vault, and Jaron Collis does a great job of explaining some of the technology that powers it in this Quora response. Google leverages complex data-mining and extraction algorithms to continually glean information from the web, disambiguate it, and load it into a structured graph where meaning and relationships are clearly defined and easy to query.
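
To give a feel for how directly such a graph can be queried, here’s a minimal sketch that asks Wikidata the same Eiffel Tower question over its public SPARQL endpoint. It assumes Wikidata’s identifiers Q243 (Eiffel Tower) and P84 (“architect”); the rest is ordinary Python using the requests library.

```python
# A minimal sketch of querying a public knowledge graph directly.
# Q243 (Eiffel Tower) and P84 ("architect") are Wikidata identifiers;
# the label service resolves entity IDs to English names.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?architectLabel WHERE {
  wd:Q243 wdt:P84 ?architect .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "knowledge-graph-demo/0.1 (example)"},
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["architectLabel"]["value"])
```

If Wikidata’s records match what Google displays, this prints the architects’ names directly: the same one-step answer experience, pulled straight from the graph rather than from a page of links.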

In my recent post on Opus Research called “The Knowledge Graph and Its Importance for Intelligent Assistance,” I look at why this technology is so important for the coming age of VPAs and enterprise intelligent assistants. If you’re a developer in the field of Big Data or Machine Learning, you may very well be building the infrastructure that powers the truly smart digital assistants of the future. Those would be the ones that can answer just about any question without making you read a web page.

What the Age of Chat Means for Intelligent Assistance

There’s no disputing that messaging platforms, specifically WeChat and Line, have become the most-used interfaces on mobile devices in Asia. Because of the traction these platforms have gained, companies are building ever more functionality on top of them. The hottest trend is the addition of bots that users can message back and forth with as though they were human friends.

Text-based bots are emerging to perform all kinds of services, from hailing rideshare cars to ordering gifts to figuring out the best deal on complex travel plans. There’s no doubt that these bots represent a new form of what we’ve been calling intelligent assistance.
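
At their core, many of these bots begin as simple pattern matching over incoming messages. Here’s a minimal, platform-agnostic sketch of that idea; the intents and canned replies are invented for illustration, and a real bot would sit behind a messaging platform’s webhook and use much richer language understanding.

```python
# A minimal, platform-agnostic text bot: match an incoming message
# against simple intent patterns and return a canned reply. All names
# and replies here are illustrative, not any real platform's API.
import re

INTENTS = [
    (re.compile(r"\b(ride|car|taxi)\b", re.I),
     "Sure, where should the car pick you up?"),
    (re.compile(r"\b(gift|flowers)\b", re.I),
     "I can order a gift. Who is it for?"),
    (re.compile(r"\b(flight|travel|trip)\b", re.I),
     "Let me look for travel deals. What are your dates?"),
]

def reply(message: str) -> str:
    """Return the first matching canned reply, or a fallback."""
    for pattern, response in INTENTS:
        if pattern.search(message):
            return response
    return "Sorry, I didn't catch that. Can you rephrase?"

print(reply("Can you get me a car to the airport?"))  # ride intent
```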

Mobile users in North America and Europe aren’t using the messaging interface to the same extent as their counterparts in Asia. But if the trend continues, platforms such as Facebook Messenger and perhaps an upcoming Google competitor could become as dominant here as WeChat and Line are in China and Japan.

Many US-based brands are already rushing to get ready for the shift from apps to messaging platforms. What does this mean for intelligent assistants and technologies that companies have already invested in? For more depth on this topic, check out my latest post on Opus Research called “Why Text-Based Commerce is the Future of Intelligent Assistance.”

Joining the Opus Research Team!

For the past three years, I’ve been writing the Virtual Agent Chat blog in my spare time. My main goal was to learn as much as I could about the evolving world of intelligent assistants, both enterprise and personal. I’ve been exploring the technologies, vendors, and market trends, and providing my own perspective along the way.

The outstanding team at Opus Research was kind enough to invite me to participate in their pathfinding Intelligent Assistants Conferences and even to include me as a judge in their Intelligent Assistant Awards over the past two years. I’ve enjoyed learning from the insights of Dan Miller and Derek Top and publishing the occasional guest blog post on the Opus site.

With my recent retirement from federal service, Opus Research has invited me to join their team as an analyst. I’m glad to take them up on the opportunity, and I look forward to continuing to learn and write about the intelligent assistant space as an analyst for Opus.

While most of my blogging will happen on the Opus Research site, I’ll continue to post updates and links here. There’s lots to discover and discuss in this quickly evolving space. I hope you’ll join the conversation on the Opus site and on our various social media platforms. See you there!