In a blog post on SiliconANGLE, Mellisa Tolentino writes a good summary of Apple’s recently granted patent 8,677,377 for “Method and apparatus for building an intelligent automated assistant.” The patent document describes a system that enables personal assistants to react to voice, sensor, location, and other input around the home or other locations in order to proactively issue reminders or execute tasks.
One aspect of the patent that Tolentino doesn’t describe in her summary is the concept of an ontology.
The idea of an active ontology seems central to Apple’s vision of effective, proactive intelligent assistants.
An active ontology works by establishing categories of related concepts and assigning events to them. The ontology can also be used to apply rules to concepts. An example cited in the patent description involves the concepts MovieListing and GeographicalArea. These concepts are interrelated in the ontology. The MovieListing concept, in fact, has a rule that a GeographicalArea is mandatory for delivering suggestions of movies in the area. If the end user asks for a movie listing, but doesn’t provide location information, the automated assistant knows to prompt for that missing input.
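The patent’s MovieListing example can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Apple’s implementation; all class and function names here are hypothetical.

```python
# Hypothetical sketch of an "active ontology": concepts are linked nodes,
# and a rule can mark a related concept as mandatory before a task runs.

class Concept:
    def __init__(self, name, mandatory=()):
        self.name = name
        self.mandatory = mandatory  # related concepts required to proceed

# Interrelated concepts, mirroring the patent's example: a MovieListing
# request cannot be fulfilled without a GeographicalArea.
ONTOLOGY = {
    "MovieListing": Concept("MovieListing", mandatory=("GeographicalArea",)),
    "GeographicalArea": Concept("GeographicalArea"),
}

def handle_request(concept_name, provided):
    """Prompt for any missing mandatory concept, otherwise proceed."""
    concept = ONTOLOGY[concept_name]
    missing = [m for m in concept.mandatory if m not in provided]
    if missing:
        return "Please provide: " + ", ".join(missing)
    return f"Fetching {concept_name} for {provided['GeographicalArea']}"

# A request with no location triggers a prompt, as in the patent example;
# once the user supplies one, the assistant can deliver suggestions.
print(handle_request("MovieListing", {}))
print(handle_request("MovieListing", {"GeographicalArea": "San Jose"}))
```

The point of the rule living in the ontology, rather than in each service integration, is that the assistant can detect and request missing input generically for any concept.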
Apple’s patent claims that active ontologies will make it easier for less experienced developers to build assistants that integrate multiple services within a single, visual framework.
The future development of virtual agents / intelligent assistants is almost certainly going to require methods for these assistants to develop situational awareness and understand how to activate capabilities suited to the current environment. It’s uncertain at this point whether Apple intends to develop the technologies it describes in the patent, but it seems a pretty safe bet that future intelligent assistant implementations will include these types of concept-based interactions.