Open Interoperability Standards for the Internet of Things

An IBM press release reported that a recently formed consortium is aiming to develop open interoperability standards for the Internet of Things. Founding members of the consortium include AT&T, Cisco, General Electric, IBM, and Intel.

The press release refers to the “Industrial Internet,” which I take to mean an ecosystem of connected devices that each accomplish specialized tasks in different industrial domains. The group calls itself the Industrial Internet Consortium (IIC) and it strives to connect more of the physical world with the Internet and establish a “plug and play” capability for devices and technologies.

Stacey Higginbotham writes on Gigaom that the IIC might be busting up the AllSeen Alliance party that’s underway at the invitation of Qualcomm and others. These technology companies are already working on standards for the consumer side of connected devices, currently based on Qualcomm’s AllJoyn protocol, which Qualcomm turned over to the Linux Foundation for further development. Are they being shut out of the IIC, which might be pushing a competing agenda? It all bears watching.

Something tells me that intelligent assistants will play a role in this broadly connected world of the future. Earlier this week I wrote about Apple’s patent for “Method and apparatus for building an intelligent automated assistant.” To work effectively and gather input from a variety of sensors and devices, intelligent assistants will benefit from open standards as well. It’ll be interesting to see if the IIC, or some other community, starts to explore open interoperability standards for virtual agent technologies.

Active Ontologies to Coordinate Intelligent Assistant Actions

In a blog post on Silicon Angle, Mellisa Tolentino gives a good summary of Apple’s recently granted patent 8,677,377 for “Method and apparatus for building an intelligent automated assistant.” The patent document describes a system that enables personal assistants to react to voice, sensor, location, and other input around the home or other locations to proactively issue reminders or execute tasks.

One aspect of the patent that Tolentino doesn’t describe in her summary is the concept of an ontology.
The idea of an active ontology seems central to Apple’s vision of effective, proactive intelligent assistants.
An active ontology works by establishing categories of related concepts and assigning events to them. The ontology can also be used to apply rules to concepts. An example cited in the patent description involves the concepts MovieListing and GeographicalArea. These concepts are interrelated in the ontology. The MovieListing concept, in fact, has a rule that a GeographicalArea is mandatory for delivering suggestions of movies in the area. If the end user asks for a movie listing, but doesn’t provide location information, the automated assistant knows to prompt for that missing input.
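
To make the idea concrete, here’s a minimal sketch of how an active ontology might link concepts and enforce a mandatory-input rule. The class and method names are my own invention for illustration; the patent doesn’t spell out an implementation like this.

```python
# A minimal sketch of an "active ontology": interrelated concepts plus
# rules that fire when a mandatory input is missing. Illustrative only;
# this is not Apple's implementation.

class Concept:
    def __init__(self, name, required=None):
        self.name = name
        self.required = required or []  # related concepts that must be supplied

class ActiveOntology:
    def __init__(self):
        self.concepts = {}

    def add(self, concept):
        self.concepts[concept.name] = concept

    def evaluate(self, concept_name, inputs):
        """Prompt for any mandatory related concept missing from inputs."""
        for requirement in self.concepts[concept_name].required:
            if requirement not in inputs:
                return f"Which {requirement} did you have in mind?"
        return f"Looking up {concept_name} for {inputs}"

ontology = ActiveOntology()
ontology.add(Concept("GeographicalArea"))
ontology.add(Concept("MovieListing", required=["GeographicalArea"]))

# The user asks for movies but never says where they are:
print(ontology.evaluate("MovieListing", {}))
# -> Which GeographicalArea did you have in mind?

print(ontology.evaluate("MovieListing", {"GeographicalArea": "San Jose"}))
# -> Looking up MovieListing for {'GeographicalArea': 'San Jose'}
```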

Apple’s patent claims that by using active ontologies, it will be easier for less experienced developers to build assistants that can integrate multiple services using a single, visual framework.

The future development of virtual agents / intelligent assistants is almost certainly going to require methods for these assistants to develop situational awareness and understand how to activate capabilities suited to the current environment. It’s uncertain at this point if Apple intends to develop the technologies it describes in the patent, but it seems a pretty safe bet that future intelligent assistant implementations will include these types of concept-based interactions.

Intel’s Jarvis: How Many Personal Assistants Can Dance On The Head of a Pin?

Earlier this year, Intel announced its Jarvis virtual assistant platform, an attempt to fit a full-featured virtual assistant into an earpiece. Named after Tony Stark’s artificially intelligent computer assistant from the Iron Man comics and movies, Intel’s virtual agent has several special features.

First, Jarvis fits onto a single chip. Intel makes the Edison module that houses all the components powering the virtual assistant, including Jarvis’s voice recognition and natural language processing features (apparently powered by Nuance). Edison, based on Intel’s Quark technology, is a miniature computer embedded in what looks like an SD card. That’s a lot of intelligence in a small footprint.

The second interesting fact about Jarvis is that it operates without relying on the cloud. Siri, Google Now, and other mobile personal assistants are cloud-based technologies. When you ask this current generation of personal assistants a question, your voice is sent to remote servers, which parse the meaning of your statement, run search algorithms to find an answer, and return the result to your mobile device. That round trip, plus the processing time, delays the response. With the built-in intelligence of Intel’s Edison chip, Jarvis offers the promise of responding to your inquiry immediately.
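
As a toy illustration of why that round trip matters, here’s a sketch that simulates both architectures. The function names and delays are invented for the example, not measured from any real assistant.

```python
import time

# Toy simulation of cloud vs. on-device assistants. The sleep() calls
# stand in for network hops; all numbers are invented for illustration.

def cloud_assistant(question):
    time.sleep(0.15)                           # upload audio to the server
    answer = f"cloud answer to {question!r}"   # remote parsing and search
    time.sleep(0.15)                           # download the result
    return answer

def on_chip_assistant(question):
    return f"local answer to {question!r}"     # everything runs on-device

for assistant in (cloud_assistant, on_chip_assistant):
    start = time.time()
    assistant("what's on my calendar?")
    print(f"{assistant.__name__}: {time.time() - start:.2f}s")
```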

As the age of smart machines and the Internet of Everything evolves, there may be a growing demand for intelligent microprocessors that perform all the functions of a personal assistant, but without having to depend on an Internet connection. In a previous post, I wrote about Cognitive Code’s SILVIA technology, which also has the ability to run an intelligent assistant’s brain in a very small footprint. It’ll be interesting to watch the evolution of intelligent personal assistant technology as the world of smart devices expands.

Pitch an IBM Watson App or Roll Your Own Watson Jr.

I’ve written previously about the IBM Watson cognitive computing platform. I recently visited the Watson website and saw that IBM is holding a sort of crowdsourcing contest to find talented developers who have ideas for great Watson-powered apps. The IBM Watson team is soliciting proposals until March 31. According to their timeline, they’ll review the proposals and down-select to a group of 25 finalists by April 28. It seems that they’re looking for seasoned teams with previous experience delivering mobile apps. Ideally, they’re hoping to get proposals from teams with innovative, solid ideas that leverage Watson’s ability to sort through structured and unstructured data and present possible answers to specific inquiries. My guess is that IBM is most interested in apps that zero in on narrow domains (e.g. healthcare, real estate, or weather) but have applicability to a broad audience.

Teams that make it into the top 25 will be given access to a Watson API and sandbox so that they can build out prototypes of their concepts. Out of the top 25, judges will select 5 finalists to pitch their app ideas to a live panel. It’ll be interesting to see the results. If you’ve got an idea for an app that leverages Watson’s powerful question-answering capabilities, don’t miss this opportunity!

For developers who’d rather go it alone, but who still want to reap the benefits of cognitive computing, perhaps you might want to roll your own question answering platform. A while ago, I ran across a great post by Tony Pearson that describes how to build a version of IBM Watson in your basement (or garage, or wherever suits you). The article goes into an amazing amount of detail. Even if you don’t have ambitions to create your own Watson Jr., the information is enlightening and helps you gain a better understanding of what makes the Watson cognitive computing platform tick.

CloudMagic Cards Could Foreshadow Personal Assistant Features

TechCrunch’s Sarah Perez recently wrote about a new capability offered by CloudMagic, a search-focused email app. While the CloudMagic solution doesn’t qualify as a personal assistant or virtual agent, the new functionality Perez describes is intriguing. Similar capabilities might be worth exploring in a mobile personal assistant app.

CloudMagic Mail’s new feature is called “cards.” Cards are connections to third-party apps, woven seamlessly into the CloudMagic email interface. Several of these integrations are shown in a CloudMagic video.

If you receive an email that contains an interesting recipe, you can easily call up an Evernote card to file the recipe away for future reference. A tap takes a news story you like straight from an email into Pocket. You can add tasks to Trello. If you get an email from a sales prospect, you can quickly add the contact to Salesforce.com.

The CloudMagic cards knit all of your support tools together as if they were one unified application. The tools themselves become essentially invisible. It’s just the information that remains, and you’re able to store and access this information in the way that’s most convenient and effective for you.
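
To sketch the pattern in code: imagine every tool hiding behind a uniform card interface, with the mail client simply routing content to whichever card you tap. This is a hypothetical design of my own, not CloudMagic’s actual implementation.

```python
# A hypothetical sketch of the "cards" pattern: third-party tools sit
# behind one uniform interface inside the mail client. These classes
# are illustrative and don't reflect CloudMagic's real architecture.

class Card:
    name = "card"
    def save(self, content):
        raise NotImplementedError

class EvernoteCard(Card):
    name = "Evernote"
    def save(self, content):
        return f"Filed in Evernote: {content!r}"

class TrelloCard(Card):
    name = "Trello"
    def save(self, content):
        return f"Created Trello task: {content!r}"

class MailClient:
    def __init__(self, cards):
        self.cards = {card.name: card for card in cards}

    def tap_card(self, card_name, email_content):
        # One tap routes the email content to the chosen tool.
        return self.cards[card_name].save(email_content)

client = MailClient([EvernoteCard(), TrelloCard()])
print(client.tap_card("Evernote", "Grandma's chili recipe"))
print(client.tap_card("Trello", "Follow up with sales prospect"))
```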

CloudMagic cards bring up the question: couldn’t this work with digital personal assistants? The best personal assistants are agents that can bridge the gap between all the different apps and tools that you need. I’d like a personal assistant that I can instruct to take information out of my emails and include it in project plans, PowerPoint presentations, or enterprise applications.

How can we create personal assistants that have this inherent integration capability? In his book The Software Society, which I reviewed a few weeks back, Bill Meisel writes about the need for a set of web standards that could help personal assistant applications locate specific content (maybe something like snippet mark-up?). Perhaps these standards could include APIs to third-party applications that would allow any personal assistant to access them and update or retrieve data. These types of standards might help us realize the dream of an efficient ecosystem of capabilities that our personal assistants can leverage to make our lives easier. The CloudMagic cards may be just a taste of good things to come.
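
As a thought experiment, such a standard might pair a structured payload that an assistant could lift from an email or a web page with a common API call that any third-party app agrees to accept. The vocabulary, field names, and endpoint below are entirely hypothetical.

```python
# A purely hypothetical "assistant markup" payload plus a common API
# call. Nothing here is a real standard; it's a sketch of the idea.

snippet = {
    "@type": "ProjectTask",            # imagined shared vocabulary
    "title": "Send the revised budget",
    "due": "2014-04-15",
    "source": "email:<message-id-1234>",
}

def assistant_update(app_endpoint, payload):
    """Any assistant could push a payload like this to any app that
    implemented the (imagined) common standard."""
    print(f"POST {app_endpoint} -> {payload['@type']}: {payload['title']}")

assistant_update("https://projects.example.com/api/tasks", snippet)
```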

The Problem With Today’s Chatbots

CNET recently published an interview with Bruce Wilcox, the creator of the open source chatbot platform ChatScript. The interview, by Daniel Terdiman, sheds some light on what Wilcox believes are the strengths and weaknesses of today’s chatbot technologies.

Before getting into the interview content, Terdiman describes the notoriety surrounding Talking Angela, a sassy chatbot mobile app that Bruce created with his wife, Sue Wilcox. Rumors are rampant that the app could be a front for pedophiles. Curious smartphone users have been downloading the app in droves on their iPhones and Android devices, sending Talking Angela to the top of the app store charts.

Wilcox insists that there’s absolutely no validity to the rumor. But perhaps the Wilcoxes’ strategy for making a successful chatbot contributed at least in part to the urban legend. Bruce Wilcox strives to create believable personalities in his chatbots, and Talking Angela, who appears as a cat, asks those who talk to her for personal information such as name, age, and apparently the names of friends as well. She also strikes the familiar, chatty pose of a teenager. All of these tactics help her appear more real and cover up her conversational superficiality.

This story got me thinking about how chatbots work and how they compare to personal assistants that are gaining so much traction in the mobile marketplace.

In the interview, Wilcox talks about ChatScript and his approach to creating chatbots. He points to past successes, including the fact that his chatbots are the only ones to have made it into the final rounds of the Loebner Prize competition for the past four years. You can check out the script of a 15-minute conversation that occurred between a judge and the Angela bot during the 2012 Chatbot Battles. Wilcox describes the conversation as close to great, presumably meaning that his chatbot responded to the judge’s questions and comments in a believable, even human-like way. It’s certainly an impressive performance for a chatbot, but if you read through the script, I think you’ll see that Angela’s responses are often far from convincing.

The disappointing fact is that chatbot technology, when compared to today’s conversational search algorithms, just isn’t very good. The fundamental structure of chat scripts requires that the bot creator anticipate almost everything the other person will say. This is a huge limitation, so it’s interesting to examine how successful bot masters work around it.

Wilcox says that you can trip up a chatbot by asking questions that rely on physical inference. An example would be a question like: “If I drop a rock into a pond, but the pond is frozen, what will happen?” That’s obviously going to be a tough question to anticipate and to answer appropriately.

I think there are other, even easier ways to trip up a chatbot. One way is to ask it for further details about something it just said. If the chatbot says “I like to play checkers,” ask it “what do you like about it?” Every chatbot I’ve conversed with gets stumped by this recursive questioning and will answer with something completely unrelated to checkers or why it likes to play that game. The chatbot doesn’t know what “it” in your question refers to. It doesn’t maintain conversational context from one sentence to the next. This makes every chatbot seem like a complete airhead.
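
A toy pattern-matching bot makes the failure easy to see. The sketch below is my own bare-bones illustration (not ChatScript): because each input is matched in isolation, the bot has no memory of what “it” refers to.

```python
import re

# A bare-bones pattern-matching chatbot. Each input is matched against
# the rules in isolation, with no memory of the previous exchange, so
# follow-up questions about "it" fall through to the canned fallback.
# (Illustrative only; this is not how ChatScript is implemented.)

RULES = [
    (re.compile(r"\bplay\b", re.I), "I like to play checkers."),
    (re.compile(r"\bmusic\b", re.I), "I love pop stars!"),
]
FALLBACK = "I'm a little monster (claw claw)"  # scripted off-topic dodge

def reply(user_input):
    for pattern, response in RULES:
        if pattern.search(user_input):
            return response
    return FALLBACK  # nothing matched; context from the last turn is gone

print(reply("What do you like to play?"))   # -> I like to play checkers.
print(reply("What do you like about it?"))  # -> the fallback; "it" is lost
```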

For unanticipated questions or comments, the bot master needs to throw in offhand comments that don’t come across as completely off the wall. As mentioned earlier, Wilcox’s strategy is to create a character with a personality and a backstory. Angela is a flighty teenager with definitive tastes in music and fashion. She can chatter on about the specifics of pop stars and other icons of modern culture. Teenagers are self-absorbed by nature, so it’s not completely unexpected for them to ignore questions or go off topic. If you ask something Angela can’t match with an existing response pattern, she might respond with “I’m a little monster (claw claw)” (an example taken from the Chatbot Battles script linked to above). Since you’re conditioned to think you’re talking to a teenager, this off-topic response might be just convincing enough to keep the conversation going. If the exact same response comes up again, though, you’re likely to see through the thin veneer and switch to another activity.

Who would you rather talk to: a search algorithm that doesn’t pretend to have a personality but that can understand and appropriately respond to pretty much anything you can think to ask, or a make-believe teenage airhead? That’s the challenge we face as we try to create meaningful conversational assistants.

Cortana Demo Released – Does It Look like “Her”?

The Verge reported that a demo of Microsoft’s digital personal assistant Cortana for Windows Phone 8.1 was released today. Cortana is represented by a spinning circular image that looks remarkably similar to the one used in Spike Jonze’s movie Her to represent the intelligent operating system Samantha. Is the similarity just a coincidence, I wonder? Another similarity: as part of the set-up process, the system asks you personal questions so that it can learn more about you.

In the demo, Cortana doesn’t ask about your relationship with your mother(!), but it does appear to ask what you like to do, what you enjoy reading, and what your culinary preferences are. It has you choose from a list of multiple-choice answers for each question. Cortana presumably stores your responses in the Notebook feature that The Verge reported on in an earlier article and that we wrote about last week in the “privacy fence” post.

According to The Verge, Microsoft will officially launch the Cortana mobile personal assistant in April at its Build conference. You can see some still shots of Cortana in this Verge post from earlier today.