News broke last week that Intel has purchased some of the assets of Ginger Software for between $20 million and $30 million. The purchase included Ginger’s personal assistant technology.
Seeking Alpha, which reported on the Ginger acquisition from an investor’s point of view, saw the purchase as a positive move. I wrote in an earlier post about Intel’s Jarvis technology, which is built on a specialized mini computer. This powerful chip, which runs Linux and other software tools, supports a full-fledged virtual assistant without relying on the cloud. It makes it feasible for Intel to load the Ginger personal assistant directly onto a wearable device, such as the Jarvis smart headset.
Last year, Intel acquired Indisys, which was called the “intelligent dialog” company. I couldn’t find further information on how Intel has utilized the Indisys technology. In the case of the Ginger purchase, Intel has added at least two engineers to its team. Yael Karov, CEO and Chief Scientist of the personal assistant division of Ginger, and Micha Breakstone, an expert in NLP, are reportedly both moving over to Intel as part of the acquisition.
I found a lengthy interview with Yael Karov from last February, which includes some impressive demos of the Ginger virtual assistant technology.
As others have noted, competition in the hot virtual assistant space is picking up. It makes life interesting for those of us keeping a close eye on the virtual agent / personal assistant market.
Earlier this year, Intel announced their Jarvis virtual assistant platform, which is an attempt to fit a full-featured virtual assistant into an earpiece. Named after Tony Stark’s artificially intelligent computer assistant from the Iron Man comics and movies, Intel’s virtual agent has several special features.
Firstly, Jarvis fits onto a single chip. Intel makes the Edison microprocessor that houses all the components that power the virtual assistant. These components include Jarvis’s voice recognition and natural language processing features (apparently powered by Nuance). Edison is based on Quark technology and is a mini computer embedded in what appears to be an SD card. That’s a lot of intelligence in a small footprint.
The second interesting fact about Jarvis is that it operates without relying on the cloud. Siri, Google Now, and other mobile personal assistants are cloud-based technologies. When you ask this current generation of personal assistants a question, your voice is sent to servers to parse the meaning of your statement, then to search algorithms to find an answer to your question, and then the result is returned to your mobile device. This round trip, plus the processing time, delays your response. With the built-in intelligence of Intel’s Edison chip, Jarvis offers the promise of responding to your inquiry immediately.
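The latency argument above can be sketched as a simple back-of-the-envelope model. All of the numbers below are hypothetical placeholders chosen for illustration, not measurements of any actual assistant:

```python
# Illustrative latency model: cloud-based vs. on-device voice assistant.
# Every figure here is a hypothetical assumption for comparison only.

def cloud_assistant_latency(uplink_ms=80, parse_ms=150,
                            search_ms=200, downlink_ms=80):
    """Model the round trip described above: audio is uploaded,
    parsed on a server, answered by a search backend, and the
    result is sent back down to the device."""
    return uplink_ms + parse_ms + search_ms + downlink_ms

def on_device_latency(parse_ms=150, lookup_ms=50):
    """Model local processing on an embedded chip: the same parsing
    work, but no network hops in either direction."""
    return parse_ms + lookup_ms

if __name__ == "__main__":
    print(f"cloud round trip: {cloud_assistant_latency()} ms")
    print(f"on-device:        {on_device_latency()} ms")
```

Whatever the real numbers turn out to be, the structural point holds: the network hops and server queueing are terms the on-device design simply removes from the sum.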
As the age of smart machines and the Internet of Everything evolves, there may be a growing demand for intelligent microprocessors that perform all the functions of a personal assistant, but without having to depend on an Internet connection. In a previous post, I wrote about Cognitive Code’s SILVIA technology, which also has the ability to run an intelligent assistant’s brain in a very small footprint. It’ll be interesting to watch the evolution of intelligent personal assistant technology as the world of smart devices expands.
TechCrunch ran an article last week scooping the fact that Intel acquired a Spanish natural language startup back in May of this year. The acquired company was called Indisys, and it specialized in computational linguistics and virtual agent (or “intelligence assistant”) technologies. Ingrid Lunden of TechCrunch speculates that Intel will use the Indisys technology to continue building out its “perceptual computing” framework.
Perceptual computing is the term that Intel seems to have coined for software that can sense a user’s motions and gestures to control the user interface. Intel offers a perceptual computing software development kit (SDK) that developers can use in conjunction with a special camera to create gesture-based games and other interactive software.
So how does natural language fit into the vision for perceptual computing? There’s an obvious link between gesturing and speaking. One can imagine that besides just motioning at a game to get the onscreen character to move, a player would like to be able to give verbal commands as well. Interacting with software by gesturing and talking has implications beyond gaming platforms. In her TechCrunch article, Lunden mentions that Intel has demonstrated multiple devices that showcase their “gesture and natural language recognition business.”
Now that Intel has purchased Indisys, they’ll have at least the basis for advanced language recognition and even virtual agent technologies to incorporate into their product set. It remains to be seen how perceptual computing and conversational software will intersect.