Virtual Human Toolkit – A Treasure Trove for Virtual Agent Developers

The University of Southern California’s Institute for Creative Technologies offers a Virtual Human Toolkit for constructing animated conversational characters. I ran across the Virtual Human Toolkit while browsing through the proceedings of the 13th International Conference on Intelligent Virtual Agents (IVA 2013), held in Edinburgh, UK, August 29-31, 2013. A team from the USC Institute for Creative Technologies presented a paper there titled “All Together Now: Introducing the Virtual Human Toolkit.”

The goal of the Virtual Human Toolkit is to provide a suite of ready-made components that developers can use to build well-rounded virtual agents more quickly. Virtual agent characters can offer many benefits, but they are composed of numerous complex technical components. Most teams don’t have access to all the knowledge and skills needed to build virtual characters with a broad range of capabilities. A versatile virtual human would ideally be able to simulate human behavior, perceive and adequately react to the behavior of others, and respond appropriately to questions or statements. Virtual humans are costly to develop, so the toolkit from USC’s Institute for Creative Technologies should be a great help to small teams looking to experiment with the technology.

Based on the documentation available, the Virtual Human Toolkit currently consists of the following components:

  • Speech Recognition
  • Natural Language Understanding
  • Nonverbal Behavior Understanding
  • Natural Language Generation
  • Nonverbal Behavior Generation

These capabilities are embedded in individual modules that are all connected via an underlying messaging platform. A core module, called Multisense, enables the virtual human to track and interpret the nonverbal behavior of its human conversational partner. Using various input devices, the virtual human can track facial expressions and body language and then analyze that input to infer its partner’s emotional state.
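
To make the modules-plus-messaging architecture concrete, here is a minimal sketch of a publish/subscribe bus in Python. It is illustrative only: the MessageBus class, topic names, and message fields are invented for this example and are not the toolkit’s actual messaging API.

```python
# Illustrative only: a tiny in-process publish/subscribe bus showing how a
# perception module and a dialog module could exchange messages over topics.
from collections import defaultdict
from typing import Callable, Dict, List


class MessageBus:
    def __init__(self) -> None:
        # topic name -> list of handler callbacks
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


bus = MessageBus()

# A dialog module listens for the emotional state inferred by perception...
bus.subscribe("perception.state", lambda m: print(f"Dialog module received: {m}"))

# ...and a perception module publishes its (hypothetical) analysis result.
bus.publish("perception.state", {"speaker": "user", "emotion": "frustrated", "confidence": 0.72})
```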

The NPCEditor module interprets incoming dialog and then determines an appropriate response. Currently the Virtual Human Toolkit uses chatbot-like pattern matching to engage in dialog. NPCEditor also appears able to use statistical models to pick the best-perceived response when it encounters an utterance that doesn’t match exactly, a capability that puts it ahead of basic pattern-matching scripts.
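
Here’s a rough sketch of that fallback idea: when no pattern matches exactly, score each stored answer against the utterance and return the highest-scoring one. The token-overlap scoring and the question–answer pairs below are stand-ins I made up for illustration; NPCEditor’s actual statistical models are more sophisticated than this.

```python
# Toy response selection: score stored question/answer pairs against the
# user's utterance and return the answer whose question matches best.
def token_overlap(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)


# Hypothetical question/answer pairs, invented for this example.
qa_pairs = [
    ("what is your name", "My name is Ada."),
    ("where are you from", "I was created at a research lab."),
    ("what can you do", "I can answer questions and hold a short conversation."),
]


def best_response(utterance: str) -> str:
    scored = [(token_overlap(utterance, question), answer) for question, answer in qa_pairs]
    score, answer = max(scored)
    return answer if score > 0 else "I'm not sure I understood that."


print(best_response("tell me what you can do"))
# -> "I can answer questions and hold a short conversation."
```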

The NonVerbal Behavior Generator helps the virtual human plan its nonverbal responses, such as nodding and gesturing with the arms and hands. Other components synchronize speech with its accompanying behaviors, including gaze, gestures, and head movements.
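
As a toy illustration of rule-based behavior planning, the sketch below maps keywords in an utterance to behavior tags and emits them alongside the speech in a BML-style XML snippet that a downstream animation component could synchronize. The rules, element names, and attributes are invented for this example; they are not the toolkit’s actual rule set or markup schema.

```python
# Illustrative keyword-to-behavior rules; real systems use far richer cues.
import re

RULES = [
    ("yes", '<head type="nod" start="0.0"/>'),
    ("no", '<head type="shake" start="0.0"/>'),
    ("you", '<gesture type="point" target="listener" start="0.2"/>'),
]


def generate_behavior(utterance: str) -> str:
    # Tokenize crudely, then attach a behavior tag for each matching keyword.
    tokens = set(re.findall(r"[a-z']+", utterance.lower()))
    behaviors = [tag for keyword, tag in RULES if keyword in tokens]
    body = "\n  ".join([f'<speech text="{utterance}"/>'] + behaviors)
    return f"<bml>\n  {body}\n</bml>"


print(generate_behavior("Yes, I can help you with that"))
```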

In their IVA 2013 article, the Institute for Creative Technologies team suggests a number of practical applications for the Virtual Human Toolkit, including question-answering characters, virtual listeners, virtual interviewers, and virtual role-players.

The toolkit is available free of charge for the academic research community and for U.S. Government uses. There’s an email address to use if you’d like to contact the team about using the toolkit for commercial purposes.
