Teaching Machines to Understand Us Better

Last week I wrote about the importance of emotional intelligence in virtual assistants and robots on the Opus Research blog. At the recent World Economic Forum in Davos there was an issue briefing on infusing emotional intelligence into AI. It was a lively and interesting discussion. You can watch a video of the half-hour panel. I’ll summarize my key takeaways.

The panel members were three prominent academics in the field of emotional intelligence in computer technology:

  • Justine Cassell, Associate Dean, Technology, Strategy and Impact, School of Computer Science, Carnegie Mellon University, USA
  • Vanessa Evers, Professor of Human Media Interaction, University of Twente, Netherlands
  • Maja Pantic, Professor of Affective and Behavioral Computing, Imperial College London, United Kingdom

Maja Pantic develops technology that enables machines to track areas of the human body that “broadcast” underlying emotions. The technology also seeks to interpret the emotions and feelings of a person based on those inputs.

Vanessa Evers has been working with Pantic on specific projects that apply a machine’s ability to understand emotion and even social context. Evers emphasizes how critical it is for machines to understand social situations in order to interact with human beings effectively.

One interesting project she cites involves an autonomous shuttle vehicle that picks up passengers and delivers them to terminals at Schiphol Airport. The team is training the shuttle to recognize family units: it wouldn’t be effective if the shuttle made room for mom and dad and then raced off, leaving two screaming children behind. Evers also cites the example of the shuttle steering around someone who is taking a photo instead of barging right in front of them. Awareness of social situations is critical if we’re to accept thinking machines into our lives.

Justine Cassell builds virtual humans, and her goal is to construct systems that evoke empathy in humans (not to build systems that demonstrate or feel empathy themselves). This is an interesting distinction. Empathy is what makes us human, Cassell notes, yet many people have a difficult time feeling empathy or interacting effectively with other people. This is especially true of individuals with autism, including those with high-functioning forms such as Asperger’s syndrome.

In her work, Cassell has shown that interactions with virtual humans can help people with autism better grasp the cues of emotion that can be so elusive to them under normal conditions. She has also created virtual peers for at-risk children in an educational environment. The virtual peer gets to know the child and develops a rapport, using what Cassell calls “social scaffolding” to improve learning. For example, if a child feels marginalized for speaking a dialect different from that of the teacher, the virtual peer will speak to the child in his or her dialect, but then model how to switch to standard English when interacting with the teacher. The child is encouraged to stay in touch with her home culture, but also learns how to succeed in the classroom.

Another notable comment by Cassell was that she never builds virtual humans that look too realistic. Her intent is not to fool someone into believing they are interacting with a real human. People need to be aware of the limits of the virtual human, while the avatar still evokes an unconscious human response and natural interaction.

The panel cited other examples from research that illustrate how effective virtual assistants can be in helping humans improve their social interactions. In the future, it may be possible for our intelligent assistants to give us tips on how to interact more effectively with those around us. For example, a smart assistant might buzz us if it senses we’re being too dominant or angry. The technology isn’t quite there yet, but it could be headed in that direction.

Overall the panelists were optimistic about the direction of artificial intelligence. They also expressed optimism in our ability to ensure our future virtual and robotic companions understand us and work with us effectively. It’s not about making artificial intelligence experience human emotion, they emphasized, but about building machines that understand us better.
