Pandorabots Offers Artificial Intelligence as a Service (AIaaS)

Pandorabots has rolled out their Artificial Intelligence as a Service (AIaaS) platform. I wrote about the recently launched Pandorabots Playground in a previous post. The Playground offers a great option for businesses and app developers looking for a do-it-yourself approach to building conversational virtual assistants. The new AIaaS gives developers a ready-made toolkit for incorporating interactions with their conversational bots into apps and websites using client-side code. Both the Playground and the AIaaS platform support the new and improved AIML 2.0 bot scripting language. Not only does AIML 2.0 offer many new features over the previous AIML 1.x versions, but it also enables your app to control native operations on a mobile device, such as making a phone call or setting reminders.

To get started with the Pandorabots AIaaS platform, you need to register for an account. (The AIaaS account is separate from your Playground account.) You can choose between a freebie option and several paid plans. The freebie option offers you 2 bots and up to 25 API calls per day. The paid plans start as low as $9 a month for 10 bots and up to 250 API calls per day. More robust plans are available.

When you activate your account, an app is created for you. Once the Pandorabots team approves your app, it acts as your gateway into the Pandorabots API. You can use the APIs to access and control your bot from a website, mobile app, or social media site. The bot must be hosted on the Pandorabots hosting platform. The APIs are RESTful, so you can use client-side code to access all the functions of your server-side bot. Software Development Kits (SDKs) that provide methods for all of the Pandorabots APIs are available for several common web programming languages.

As part of your app, you’re issued a unique User Key and Application ID. You use these two parameters when you make calls to the Pandorabots APIs. The API Documentation provides a basic overview of how everything works. It also provides access for testing out the API operations, which include: Create bot, Delete bot, Upload file, Compile bot, and Talk to bot, among others. You can try most of them out from the documentation platform by providing your User Key and App ID.
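To make the REST pattern concrete, here’s a minimal Python sketch of what a “Talk to bot” call could look like. The endpoint URL, parameter names, and response format below are assumptions for illustration only; substitute the real values from the Pandorabots API documentation and your own account.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL -- replace with the one given in the API docs.
BASE_URL = "https://aiaas.pandorabots.com/talk"


def build_talk_request(app_id, user_key, botname, text):
    """Assemble the URL and form body for a 'Talk to bot' call."""
    url = f"{BASE_URL}/{app_id}/{botname}"
    body = urllib.parse.urlencode({"user_key": user_key, "input": text})
    return url, body


def talk_to_bot(app_id, user_key, botname, text):
    """POST the user's input to the hosted bot and return the parsed JSON reply."""
    url, body = build_talk_request(app_id, user_key, botname, text)
    req = urllib.request.Request(url, data=body.encode("utf-8"), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Your User Key and App ID plug in as the two credentials, matching the pattern the documentation describes for every API operation.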

To actually build and deploy your bots, you’ll want to leverage the detailed documentation available in the Pandorabots Blog. For example, there’s a post on Creating a Virtual Service Representative that walks you through all the steps of building and deploying a simple conversational assistant. There’s even a basic template that you can leverage to provide your virtual assistant with the core information it needs to answer frequently asked questions about your business. While you’ll want to give your bot more information over time, the template can get your assistant up and running quickly.

DIYers should definitely take a look at the Pandorabots Playground and the newly launched AIaaS. If you’d rather leave the scripting, programming, and configuring to the experts, it seems that Pandorabots also offers consulting and other engineering services.

HOMER: Personal Home Intelligent Assistant

First there was Ubi. Then Amazon announced the forthcoming Echo. Now a graduate of India’s National Institute of Technology Karnataka has launched an Indiegogo campaign for HOMER, a home automation personal assistant.

HOMER appears to be in the concept phase. It has an ambitious list of possible features. Its home security features are slated to include a camera, motion detector, and door and window sensors. The goal is also to interface HOMER with all your home automation devices, so that you can control your thermostat, TV, lights, and other smart appliances with voice commands.

Acting as a personal assistant, HOMER is envisioned to be able to perform many tasks. It will set alarms and adjust them based on your preferences and factors like day of the week and weather. It can access your calendar and look at traffic and weather to help you reach meetings on time. HOMER will keep track of the whereabouts of family members and send messages to or receive messages from them.

Children can interact with HOMER to ask it questions and get help with their homework. I found it interesting that one of HOMER’s anticipated skills is the ability to narrate stories. As it turns out, storytelling was one of the features on my wishlist in my recent post on Amazon Echo.

Will HOMER become a reality and will it be able to fulfill the ambitious dreams of its designers? Only time will tell. The HOMER Indiegogo funding campaign has a long way to go to meet the goal. But the fact that the concept for HOMER exists is evidence that the idea of home-based, voice-enabled intelligent assistants is in the air. Within a year or two, will we all have a smart box in our homes that talks, listens, entertains, and assists us in ways no other device currently does? It’s hard to predict what will succeed with consumers. But if the trend continues, consumers may have quite a few home assistant brands and models to choose from.

xDroid – An Intelligent Assistant for Social Search

xDroid is an intriguing personal intelligent assistant project that launched on Kickstarter a while back and that has only a few days left to meet its funding goal. The project was incubated at Columbia College Chicago’s Business & Entrepreneurship Department, so it comes with a solid pedigree.

I’m not sure why the app is called xDroid. It doesn’t seem to be tied to the Android operating system. The demo videos show the app working on an iPhone. But regardless of the name, xDroid has a different and interesting take on the concept of what makes an effective personal intelligent assistant.

To summarize the concept in a nutshell, xDroid’s main focus seems to be to connect you with things. Specifically, it aims to connect you more easily to people who are offering products or services that you are looking to buy. Likewise, it can also help you find customers for the things you hope to sell. It also connects you to others who have information that might be useful to you.

The app learns about you, your preferences, and your social network. If you need an illustrator for your latest book, xDroid will let others in your network know and it will search through the network to see if there’s an illustrator out there offering services. xDroid will facilitate a connection between you and the illustrator, or give you information about competing illustrators so that you can select the one you want.

Two aspects of xDroid stand out: the extremely futuristic and innovative user interface and the “social search” concept. Let’s start with the user interface. The animated screen captures on the Kickstarter page look like something off the dashboard of an alien spacecraft. The underlying concepts seem to be about showing connections between things: connections between you and others in your extended network, connections between your preferences and what others are seeking to buy and sell, connections between your friends and their current activities and interests.

The concept of social search isn’t new. Neither is “social selling.” That’s what LinkedIn is all about, I suppose. But the xDroid concept seems a bit more seamless. Based on the way the app is described, it appears that xDroid can anticipate what you want to buy and initiate searches in the background that span the web as well as all of the contacts in your extended network. The xDroid search engine pulls your friends and acquaintances into its search algorithm. It searches the real world, not just the web. Crowdsourcing replaces the drudgery of shopping. It’s a cool idea. Personally, I’d find the concept of social search even more intriguing if it wasn’t as much about buying and selling as about sharing ideas and learning from your social network. If you’re trying to solve a tough problem, xDroid could help quickly connect you to experts or others interested in helping you find a solution, for example.

I don’t know how much of the xDroid app is built out and how much is still in the concept phase. The Kickstarter campaign seems to have stalled, but I wish the team luck. It seems to me that they’re on to something.

Software Advice’s Report on Self-Service Channels

The team at Software Advice, an online consultancy for customer relationship management software, has published the results of a survey on customer self-service channels. The report contains interesting information on the effectiveness of a range of self-service technologies and how companies measure performance. Virtual assistants are included in the study.

To obtain the results, Software Advice surveyed 170 professionals within the customer service departments of firms across a broad range of industries. To select those 170 participants, Research Now, a third-party research partner of Software Advice, narrowed a larger pool down to those who had actually implemented self-service channels in their business and who had direct knowledge of how the business measured the success of those channels.

It turned out that the most commonly offered customer self-service channels are FAQs and knowledge bases. Interactive Voice Response (IVR) phone systems comprised the next most common channel. The least commonly offered self-service channel turned out to be virtual agents / virtual assistants. Surprisingly, though, over 50% of those surveyed indicated that their companies had virtual assistants. I would have expected the percentage to be lower, given that virtual assistants are still an emerging technology. Then again, the prescreening narrowed the participants down to those who are already fairly advanced in their use of self-service.

The next major point of inquiry was whether the survey participants monitored the effectiveness of their various self-service channels. They were also asked to provide input on what metrics they used to monitor performance and how effective they considered the metrics to be. About 60% of respondents said that they formally tracked the effectiveness of virtual assistants. I would have expected the number to be closer to 90% or more. To the best of my knowledge, most virtual assistant vendors offer out-of-the-box metrics with their solutions. One type of easily implemented metric is a simple yes/no survey at the end of a chat session that asks the user if the virtual assistant answered his or her question. This user satisfaction metric was indeed the measure that survey respondents employed most frequently. A second metric could be whether the assistant found a response to the question in the knowledge database (or on the company website) or if it came up empty. This type of metric is generally captured as part of the virtual assistant’s conversation log file.

Of the respondents who said they tracked user surveys related to virtual assistant interactions, just shy of 75% said they were satisfied that this is an effective performance gauge. To me, that means there is still room for improving how we measure the reliability of virtual assistants and true customer satisfaction levels. It would be beneficial to have a more accurate, less intrusive method than having to ask the customer if the assistant gave them a useful answer.

A final area explored by the survey was the overall effect of self-service channels on the performance of live customer contact centers. As would be hoped, it turns out that when customers have access to self-service channels, fewer of them call the support desk. As a result, live customer support personnel can take the time to improve the service they give to customers who do call in. By lessening the burden on live support agents, self-service channels helped the majority of survey respondents experience measurable improvements in the following areas:

  • Speed to answer calls
  • Cost per contact
  • First-level resolution rate
  • First-call resolution rate
  • Cost per incident

The Software Advice report is proof that self-service channels are the way to go, right? Well, interestingly, the report references a 2013 Zendesk survey that indicates the majority of consumers would still rather speak to a human than use online self-service channels. It’s important to remind ourselves that we still have a steep hill to climb to convince consumers that calling our support centers should be a last resort. As Millennials and Digital Natives comprise more of the consumer population, this preference for contact with real human support personnel may change. But regardless of how user preferences evolve, our virtual assistant technologies need to continually improve to meet consumer expectations. The only way for us to make sure our technologies are effective is to measure their results, and reports like the one from Software Advice provide insights on how to do just that. 

What I Wish Amazon’s Echo Could Do

Amazon suddenly entered the social robot market last week. Amazon isn’t marketing Echo as a “social robot.” Instead, it’s positioning the device as a smart, voice-activated speaker that’s always connected to the Internet. Somewhat confusingly, the intelligent assistant within Echo is called Alexa, and “Alexa” is also the wake word that activates the device.

In the marketing video, Alexa is shown interacting with family members. Alexa is almost portrayed as an addition to the family. Dad can ask her to help when he doesn’t know the answer to his daughter’s homework question, and sister can get Alexa to help poke fun, albeit unknowingly, at the pesky brother.

Right now Alexa’s capabilities seem fairly narrow. The assistant can play music from streaming services you’ve already subscribed to, using a Bluetooth connection to your smartphone or tablet. It can search Wikipedia and weather data sources to answer general questions or give weather updates. It can play news reports from certain radio stations. The marketing video shows Alexa telling jokes, but it doesn’t say what joke database is being used. It can track to-do lists and shopping lists, but it’s not clear if these are Amazon proprietary lists or if Alexa can connect to your current list provider of choice.

Echo appears to provide similar functionality to Ubi. Ubi, however, is an open platform with well-documented APIs and a growing community of developers providing new capabilities for the device. The team at Ubi recently announced the launch of the Ubi Channel on IFTTT (If This Then That), a service that lets developers create functions for connected devices. It remains to be seen what APIs Amazon will publish for Echo and whether they intend to court an open developer community to program features that can augment the Echo and Alexa repertoire.

Connected intelligent assistant devices like Echo, Ubi, and the social robot Jibo are pioneers in a new product category. How successful will they be? It seems to me that their most powerful rival is the smartphone. Will I bother to ask my plugged-in intelligent assistant to convert tablespoons to cups, or will I just look it up on my smartphone or ask Siri or Google Now? Will I ask my plugged-in assistant about the weather, or just check my phone?

If these plugged-in, auditory-only devices can do things my smartphone does AND other things my phone can’t yet do, I might get hooked on using them. To do what my phone does, Alexa would need to read my texts as they come in and let me dictate texts to be sent. It would need to tell me when I get what it knows is an important email and read it to me. It would keep me up to date on social media posts that I care about.  It would tell me if there’s a TV show or a movie or a concert going on that I’d like to watch but don’t know about. It would help me while away the time by reading me articles that I’m interested in or providing some sort of entertainment. Oh, and it would make the occasional phone call.

Now for the things I wish the plugged-in intelligent assistant would do that my smartphone doesn’t. It could ignore the hundreds of promotional emails I get each day, but tell me about the one or two that I’m actually interested in. It would remind me of things that I haven’t even thought to be reminded of, like the fact that my niece has a birthday coming up, that I need to schedule a service appointment for my car, and that I’m about to run out of olive oil. It would tell me if a good friend is feeling down and could use a pick-me-up call. It would tell interesting stories. It would let me know if the sweet potatoes I’m baking in the oven are done without me having to open the oven door. It would keep me connected with the world by telling me what other people on the planet are doing and thinking and maybe it would even connect me with them if I’m interested. Most importantly, it would subtly inform my friends and family about what I really want for Christmas!

Will Echo’s Alexa, Ubi, or Jibo be able to do any of these other important things that could tear me away from my smartphone? If so, I think they have a bright future.

Boy Meets Girl Chatbot

In February of 2013, I wrote a blog post about how to create a virtual agent / intelligent assistant for your business. At the time, I presented my findings about two virtual agent technology companies that I’d researched: MyCyberTwin and Chatbots4u. Both companies have gone through some changes since I tried them out over a year ago. MyCyberTwin was acquired by IBM back in the spring of this year. Chatbots4u is still up and running, but a quick look at their website has me wondering what direction they’re heading in.

Chatbots4u apparently offers hosting services for business-focused intelligent assistants, but the site’s primary goal seems to be to offer a free platform for the creation and hosting of recreational chatbots. As I noted last year, the Chatbots4u platform seems to be aimed at a younger, international crowd. At the time, the most popular chatbots were clones of teen idols like Justin Bieber. Bieber still seems to be popular, but the site is now overrun with an explosion of barely clad girlfriend bots. A recent peek led me to believe that the site is no longer G or even PG-rated, if you get my drift. What, I wonder, is the deal with all these girl bots and the penchant of young male botmasters for lewd conversation?

The best-known chatbots have generally been given a female persona. There’s ELIZA and A.L.I.C.E., for starters. Neither of those chatter bot instantiations were sexpots, in terms of their conversational databases. Siri uses a female voice. Cortana is based on the Halo video game character, who is clearly the product of adolescent male fantasies. And then you have Samantha (with Scarlett Johansson’s sultry voiceovers) in Spike Jonze’s movie Her. The disembodied intelligent assistant Samantha exudes female sexuality and uses her charms to arouse and satisfy her male end user.

All of this evidence leads to an obvious question: is it a widespread “nerdy guy” fantasy to engage in explicitly adult conversation with a computer software program masquerading as Venus? If we assume that the answer is yes, the next question would be: why? Do nerdy guys lack the gumption to engage a real flesh and blood young woman in conversation? Are the girls of today too unapproachable? Is it really more satisfying to talk to a chatbot that’s restricted to a handful of vapid phrases than to interact with another human being?

The popularity of opposite-sex chatbots probably says something about contemporary society. Exactly what that might be, I can only speculate. Are nerdy guys less lucky with the ladies than their less-nerdy counterparts? And what even classifies as “nerdy” these days? How many teenage boys don’t spend most of their time with their face in some gadget or other? Perhaps for that very reason, boys are more comfortable talking to animations of females on their mobile screens than asking a real girl out on a date.

Is there anything fundamentally wrong with all this? I must admit that the whole chatbot as sexbot thing strikes me as unseemly. I’m probably being prudish though. Adult content, or whatever you want to label it, has been around since human beings could talk. (Well, probably before that). But it’s disappointing if all Moore’s Law and the other exponential advances in computing technology get us are new ways to be morons.

If there’s a problem with the whole female chatter bot scene, in my opinion it would be that the majority of such bots are created by men. Chatbot hotties are programmed to utter what boys think they should say, or want them to say. They don’t speak in the same nuanced way that a real girl would. They don’t get offended, run away, or call the cops. Whatever boys learn by engaging in conversation with chatbots (if anything), they’d best not try a similar approach when speaking with the girls from their school or neighborhood. And they’d better not expect the same outcomes. But guys are smart enough to know the difference between fake girls and real ones, right?

What if women were to design chatbots that gave pubescent boys real, sound advice on how to talk to girls? Would any young men take an interest in this sort of wise advisor chatbot? The advisor could address such topics as: how to find out about a girl’s interests, how to show empathy when talking to a girl, how to make pleasant small talk, proven ice breakers to start the conversation, topics to avoid, what not to say when you’re getting to know a girl, how to give a girl a compliment without offending her or putting her on the spot, and so on. Are inexperienced, nerdy guys receptive to that type of mentoring from a chatbot, or do they just want to get straight to the dirty talk? An experiment to research this question might be interesting.

As intelligent assistants become more prevalent and as they improve their ability to mimic human conversation, what will our expectations be for their behavior? Will we try to shape them to be the dream partners of our fantasies, and will our human relationships suffer as a result? I’d be surprised if studies on just this topic aren’t underway at a university somewhere. In the meantime, we’ll continue to focus on the many innovative people around the globe who have loftier visions for how intelligent assistants can improve our lives.


Natural Language Processing You Can Afford. Yes, You!!

I’m not a linguist. Quite honestly, the vast majority of what LinguaSys is offering (for free for up to 20 API calls a minute and up to 500 calls a month, I might add) in their robust GlobalNLP platform consists of things I don’t even understand (lemmatization, stemming, and morphological synthesis, anyone?). But what I do get is that GlobalNLP offers a ton of extremely useful capability to anyone who needs to process language input to run their applications. In fact, the GlobalNLP platform is built on LinguaSys’s Carabao Linguistic Virtual Machine, offering the same tools and underlying semantic library that top companies use to process language input in a multitude of different languages for a variety of business-critical use cases.

I signed up for the developer platform and tried it out as best I could. Though I’m not a programmer, the API library is super easy to use and it even comes with a testing function. You can actually execute the APIs right from the library without any need to set up a development platform. The “Open Console” feature allows you to input data into all of an API’s parameters and execute the function. The resulting output is published at the bottom of the screen, so you can see exactly what you’d get if you were running the API from your own program.

GlobalNLP is a full suite of tools. There are APIs for detecting the language, parsing sentences, translating, and more. The site comes with a very thorough Q&A section, lots of helpful documentation, and an online support forum. Each API also comes with helpful source code examples in a wide variety of popular programming languages.

I put several of the APIs through their paces. The detectlanguage API does a great job at ferreting out the language of text input. I tried some German, French, and Spanish, and the API always came back with the correct answer. I even tried to trick it by entering a mix of languages, but it did a good job of determining which one was dominant in the phrase.
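Calling detectlanguage from your own program follows the same pattern you see in the Open Console: pass your key and the text, get structured output back. The base URL and parameter names in this Python sketch are placeholders I made up; copy the real ones from the source code examples in the API library.

```python
import json
import urllib.parse
import urllib.request

# Placeholder endpoint -- substitute the real GlobalNLP base URL.
BASE_URL = "https://api.globalnlp.example/detectlanguage"


def build_detect_url(api_key, text):
    """Return the full GET URL for a language-detection call."""
    query = urllib.parse.urlencode({"apiKey": api_key, "text": text})
    return f"{BASE_URL}?{query}"


def detect_language(api_key, text):
    """Call the detectlanguage API and return its parsed JSON result."""
    with urllib.request.urlopen(build_detect_url(api_key, text)) as resp:
        return json.load(resp)
```

With something like this in place, routing each user query to the right language-specific handler becomes a one-line call.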

The parse API is fun to use, as is the listSenses API, which helps decipher the words in a search query, enabling you to better understand the user’s intent. The translate function is fun to try out too, although it’s not designed to compete with human translation. Instead, the LinguaSys automated translation is based on a semantic model that is intended to give you the gist of the source text.

If you’re developing an app that needs to interpret language input in different languages, or if you want your existing app to go global, you’ll definitely want to explore the possibilities offered by LinguaSys’s GlobalNLP.