Make Your Plans for Mobile Voice Conference 2015

It’s time to start making your plans for AVIOS’s Mobile Voice Conference 2015, scheduled for April 20-21. The Mobile Voice Conference has a different focus than any other mobile or speech technology conference out there. This conference zooms into the details of how speech and natural language technology intersect with mobile devices and apps.

This year, the conference theme is “The Intelligent Connection” and a major focus will be the current state of intelligent assistant / virtual agent technologies. I gave a preview of the Mobile Voice Conference 2015 in a post this past December, which included a link to the draft agenda. The final conference program has now been published and it’s packed full of interesting presentations.

The sessions are divided into two main tracks:

  • Track 1: Applications and Case Studies
  • Track 2: Technologies and Tools

Track 1 is great if you’re looking for ideas on how your company or clients can benefit from mobile voice applications in general, including from enterprise intelligent assistants. Track 2 is perfect for practitioners interested in a deeper dive into voice platforms, natural language and speech recognition tools, and related technologies.

The conference will take place at the Sainte Claire Hotel in San Jose, CA. There are a limited number of discounted rooms available to conference attendees. Reserve yours before they’re all spoken for. You can get more details on the Mobile Voice Conference 2015 official website.

Another Free Chatbot Platform for Kicking the Tires

In the past I’ve written about platforms that enable you to create and operate chatbots at no cost. Some of the platforms I’ve covered include Pandorabots, BOT Libre!, and SecondEgo.

Another free chatbot platform is Rebot.me. I gave Rebot.me a quick spin and found it to be straightforward and easy to use for building a simple chatbot. The platform allows you to create a chatbot and train it by entering questions and corresponding answers. It also offers a chat log so that you can see the results of previous conversations. That’s basically it.

The business applications for a Rebot.me chatbot will be limited. It might work for a bot that is tightly focused on a narrow domain of possible topics. If you need a virtual agent / intelligent assistant that only has to answer a limited range of questions that can be easily anticipated, then the platform might be viable for you.

A good fit for a Rebot.me chatbot might be as the virtual hostess or host on a local restaurant’s website. The chatbot could be trained each day to answer questions about the daily specials, for example. After all, how hard can it be to rattle off an answer to the question: “What are today’s specials?” Of course, you’ll need to remember to update the information every day. A restaurant owner might also want to teach the bot to answer FAQs, such as “Do you have any gluten free choices?” I bet that question comes up more and more these days.

Once you’ve created a virtual agent bot on the Rebot.me platform, your choices for operating that bot are currently limited. There are no APIs for adding your bot to social media sites, for example. But you can easily add the chatbot to your website by copying and pasting a few lines of code. Customers will be able to interact with your chatbot from your website, but Rebot.me will be hosting the bot on its own servers. Right now there is no fee for hosting.

As with Chatbot4u.com and some of the other free platforms, Rebot.me looks to be primarily the playground of hobbyists and, if the bulk of the chatbots are any indication, adolescents. That doesn’t mean that you shouldn’t consider the platform if you’re just starting to experiment with how a virtual agent might add value to your small business.

Before you jump headlong into building a chatbot-based virtual agent and tirelessly typing out question and answer pairs, check out my article on Lessons from Shopping for Virtual Assistants on the Opus Research website. It has some tips that might give you a better overview of the types of solutions available.

Oracle Voice – A Virtual Assistant for Enterprise Software

Oracle recently announced the availability of Oracle Voice, a smartphone-based virtual assistant designed to work with Release 9 of the Oracle Sales Cloud. Enterprise software applications are perfect candidates for virtual assistants. Notoriously complex and cumbersome, enterprise software is a great target for disruption. With the new Voice technology, Oracle seems to be pioneering the use of specialized intelligent assistants to help users more easily navigate specialized business software applications–in this case a Customer Relationship Management (CRM) system.

In its marketing material, Oracle describes Voice as a “fast, friendly, fun” “Siri-like Assistant.” With the emphasis on “fun,” it seems that the Voice assistant currently has a limited range of capabilities. Designed for sales reps, the Voice app helps reps speed up the process of preparing for and wrapping up sales meetings. According to a blog post from 2014, Oracle partnered with Nuance for the speech recognition software employed within Oracle Voice.

Sales reps can use the Voice assistant to post key insights into the CRM system as soon as they come out of meetings. They can also use the assistant to enter new contacts into the system. The reps use natural language dialog to make their notes. If speech interaction isn’t their preferred method of making updates to the Oracle Sales Cloud application, they can switch to a touch-and-type interface.

To improve the performance of the Voice assistant, Oracle has included many product- and industry-specific vocabulary items in the Voice assistant’s knowledge base. Siri doesn’t need to recognize words like “Exadata” and “Exalytics,” but Oracle Voice does.

Voice also helps reps update opportunities and add tasks. The Voice assistant prompts the rep for key details. For example, when the rep wants to create a new task, the assistant asks for the due date. Sales reps can also prepare for meetings by using voice commands to access notes, activities, and sales information.
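The prompt-for-missing-details pattern described above is a classic slot-filling dialog. Here is a minimal sketch of the idea in Python; the slot names and prompts are hypothetical, and this is not Oracle’s implementation:

```python
# A minimal slot-filling dialog sketch, in the spirit of how an assistant
# like Oracle Voice prompts a rep for missing task details.
# Hypothetical slot names and prompts, not Oracle's code.

REQUIRED_SLOTS = ["description", "due_date"]

PROMPTS = {
    "description": "What is the task?",
    "due_date": "When is it due?",
}

def fill_slots(initial, ask):
    """Start from whatever the rep already said, then prompt for the rest."""
    task = dict(initial)
    for slot in REQUIRED_SLOTS:
        if not task.get(slot):
            task[slot] = ask(PROMPTS[slot])
    return task

# Simulated conversation: the rep said "create a task to send the proposal"
# but gave no due date, so the assistant asks for one.
scripted_replies = iter(["next Friday"])
task = fill_slots({"description": "send the proposal"},
                  lambda prompt: next(scripted_replies))
print(task)
```

The point of the pattern is that the rep can supply details in any order, and the assistant only asks for what’s still missing rather than forcing a rigid form.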

Oracle Voice for Oracle Sales Cloud is one of the first application-specific assistants that I’ve seen in the enterprise software space. SAP offers voice-enabled commands for specific logistics functions and I’m sure there are others. I anticipate that we’ll see many more such enterprise software assistants in the coming years. To see this assistant in action, watch the Oracle Voice demo video.


Can Micro-Moments Bring Us Closer to Our Personal Intelligent Assistants?

What are micro-moments and why do intelligent assistants need them? Last fall Jeffrey Hammond of Forrester Research wrote an article about how consumer engagement is shifting toward what he labels “micro-moments.” Hammond describes micro-moments as unprompted alerts or nudges from mobile apps that last about 5-10 seconds. Micro-moments result in brief interactions that engage the user periodically throughout the day.

Hammond states that both Apple and Google have recently opened iOS and Android so that app developers can access the unique platform services of mobile devices. This allows developers to shift away from apps that require users to intentionally access them a few times a day, to apps that send notifications or support cross-device interactions.

What does this have to do with intelligent assistants? There are two main types of digital personal assistants out there these days. The first type of assistant waits for you to talk to it and ask it a question. The second type works proactively in the background to determine what information you need and provides it without being prompted. Siri is the classic example of the passive assistant, while Google Now and EasilyDo are examples of smart digital assistants that offer up information without being prompted.

I’m not making a judgment about which type of assistant is better. Each has its place. Being able to ask a specific question is a good thing. Getting information about your schedule or surroundings without having to ask is good too. But it seems to me that micro-moments will become increasingly important for personal intelligent assistants.

The future of these smart, conversational technologies involves being networked with other connected devices, like our fitness wearables, our connected home appliances, our streaming music services, and many other data sources. The personal digital assistant should be an orchestrator of these services. It should also filter through the data gushing in from connected devices and, ideally, connect the dots to summarize what we need to know.

What if our assistant knows that we haven’t gotten much exercise all day, that the weather is deteriorating, and that we’ve been invited to dinner by friends? It could notify us that if we leave now, we can get in a mile-long walk before it starts to rain and still be back in time to get ready for dinner. It could even suggest that we pick up a bottle of wine for dinner at the corner store down the road. Maybe it’ll even remember what kind of wine our friends prefer. This whole recommendation could come as a micro-moment prompt that we can choose to act on or ignore.
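Stitched together, a proactive nudge like that is essentially a rule evaluated over a few data feeds. A toy sketch of the idea follows; every data source, threshold, and time window here is invented purely for illustration:

```python
# A toy rule for the walk-before-dinner micro-moment described above.
# All data sources, thresholds, and time windows are invented.

from datetime import datetime, timedelta

def suggest_walk(steps_today, rain_expected_at, dinner_at, now,
                 walk_minutes=20, step_goal=6000):
    """Return a nudge string if a walk fits before the rain and dinner,
    or None if no nudge is warranted."""
    if steps_today >= step_goal:
        return None  # already got enough exercise today
    walk_ends = now + timedelta(minutes=walk_minutes)
    # The walk must finish before the rain and leave 45 minutes to get
    # ready for dinner (an arbitrary buffer for this sketch).
    if walk_ends < rain_expected_at and \
       walk_ends < dinner_at - timedelta(minutes=45):
        return ("If you leave now, you can fit in a {}-minute walk before "
                "the rain and still be ready for dinner.".format(walk_minutes))
    return None

now = datetime(2015, 1, 20, 16, 0)
nudge = suggest_walk(steps_today=2500,
                     rain_expected_at=now + timedelta(hours=1),
                     dinner_at=now + timedelta(hours=3),
                     now=now)
print(nudge)
```

The hard part, of course, isn’t the rule itself but knowing which of the thousands of possible rules matter to this user right now, which is where the assistant’s learning and filtering come in.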

There’s certainly a risk that micro-moments could become bothersome. An intelligent assistant that sends out lots of notifications might get on our nerves, especially if we’re not interested in what it has to tell us. But an assistant that remains in the background runs an equally big risk of becoming a wallflower. And as we all know, it’s tough to build a satisfying relationship with someone who always waits for us to start the conversation.

Robotbase is Building the Personal Robot of the Future: Pre-orders Available Now

As reported by TechCrunch, Robotbase has launched its Kickstarter campaign to presell its artificially intelligent personal robot. CEO Duy Huynh was at CES this past week pitching the company and showing off the prototype version of the robot Maya to TechCrunch reporters and others. You can launch the pitch and demo session from within the TechCrunch article.

Robotbase seems to be referring to the product as the “Personal Robot.” Once you purchase the robot, you can call it by whatever name you like. I see a lot of similarities between Jibo, the social robot, and the Robotbase robot. The obvious major difference is that the Personal Robot is mobile, being built atop a platform with wheels. It comes with software that enables it to scan a room and make a map that it can use to autonomously navigate around obstacles.

If you look past its ability to move around a room, the Personal Robot is set to have many features reminiscent of Jibo’s proposed capabilities list. It is supposed to connect to and control your in-home connected devices, recognize faces and other objects, understand speech input, get information from the cloud, act as group photographer, and generally perform the activities of a personal assistant.

During the CES demo, CEO Huynh also demoed the Personal Robot’s abilities as a retail assistant. That’s very similar to the hardware store robot OSHbot that I wrote about last month. Robotbase’s robot is able to understand a customer question, provide an answer, and lead the customer to the location of the desired shopping item if needed (just like OSHbot). It strikes me that this retail use case might be an easier one to succeed at than the broader personal assistant use case. In a retail setting, the robot knows that the vast majority of questions will be about store merchandise. A personal home robot, on the other hand, will need to be able to anticipate and correctly react to a whole host of possible topics and conversational items.

Both Jibo and Robotbase’s Personal Robot promise a lot and they’re both still under development. In the CES demo, Huynh talks about the company’s deep learning algorithms. In fact, it’s this software technology that Huynh lauds as Robotbase’s most significant achievement. These algorithms are intended to give the Personal Robot the ability to get smarter over time based on interactions (unsupervised learning).

I like the concept of Robotbase’s Personal Robot and hope to see the company succeed. It does seem to me that delivering fully on the demonstrated robot capabilities will be tough. In the demo videos, the Personal Robot speaks with a human voice, using natural intonation. It even reads a child’s story, stressing the right words and giving the story emotion by using its voice. Mimicking this type of human intonation is surprisingly difficult for an automated text-to-speech program. It might work marginally well for canned responses, but it would be hard to accomplish for output that’s variable and created on the fly.

A Personal Robot that follows us around everywhere we go in our home would also need to be sensitive to the context we’re in at any given time. It would need to have a good sense for when we want to be interrupted with information and when we don’t. Even in the Kickstarter video, I get the sense that the Personal Robot could become annoying. If it’s with me in the kitchen, I don’t want it bugging me every couple of minutes asking if I need help with a recipe. I also don’t necessarily want it waking me up or making assumptions about how I slept. And I’m not sure I want it ordering lunch for me without checking to see what I want first, even if it knows what I usually get.

I’m assuming you can control these behaviors and that the robot will get to know your preferences over time. But getting all that right in the software is bound to be a challenge. I have confidence that at some point in the future, the vision of truly effective, unobtrusively helpful personal robots will be a reality. Let’s hope that future is right around the corner.

Tips for Evaluating Intelligent Assistants for Your Business

Today the team at Opus Research published my article “Lessons from Shopping for Intelligent Assistants.” In the article, I recount my recent experiences assisting an online retail business with an evaluation of commercial virtual assistant solutions.

I learned a lot during the evaluation process. One of the lessons was that there is great technology in the marketplace today that can really help a business manage its incoming inquiries. In the case of the business I was helping, the incoming inquiries were originating primarily from resellers and sales representatives and were expected to increase in volume due to expansion plans.

I was surprised by how some of the technology works. I started out thinking that a truly conversational virtual agent would be the best for the company. But once I started evaluating commercial solutions, I found that intelligent assistant vendors who focused more on intelligent search, rather than conversation, might be better suited to this particular company’s needs. I also learned a lot about the costs and extended benefits of intelligent assistant solutions and how you can prepare your company to take advantage of what these solutions offer.

You can read the full article on the Opus Research website. Opus Research sponsored the first annual Intelligent Assistants Conference in San Francisco last fall. Stay tuned for upcoming announcements about Opus Research’s Intelligent Assistants Conference in 2015.

Predictions on the Evolution of Intelligent Assistants

Peter Sweeney, entrepreneur, technologist, and founder of Primal.com, recently published an article on Medium called Siri’s Descendants: How Intelligent Assistants Will Evolve. In the article, Sweeney does a great job at contrasting how intelligent assistants are being positioned today versus what types of use cases he believes IAs are better suited for in the future.

Sweeney uses some very imaginative and compelling graphics to depict his predicted evolution of IAs from today to tomorrow. The key areas that Sweeney sees as ripe for evolution are:

  • Natural language interfaces vs. contextual / associative communication
  • IAs as generalists vs. IAs as specialists and domain experts
  • IAs as isolated with limited autonomy vs. IAs as networked with expansive autonomy

On the question of how we interface with IAs, Sweeney believes the preponderance of today’s IAs require the user to interact with them via natural language. But Sweeney points out that you can learn much more about people, their habits and needs, by silently observing their behaviors in context. Though he doesn’t provide these examples, Sweeney’s projections for the future would give the edge to context-aware assistants that work in the background, such as Google Now and EasilyDo, over voice-prompted assistants like Siri and SpeakToit.

In the area of what the IA knows and can do, Sweeney’s graphic shows that today’s IAs are overwhelmingly focused on trying to cover very broad topic areas. You can ask Siri anything and she can give you some sort of answer, even if it’s only pulling up distantly connected search hits. But Sweeney thinks the sweet spot for IAs in the future will be specialization, with IAs having in-depth knowledge of targeted domains. This specific knowledge will enable assistants to execute the tasks that we want them to perform for us. Sweeney calls today’s IAs “trivia buffs.” He sees a huge opportunity for IAs that have enough insight to carry out tasks, such as helping us navigate to work. Another example that I’ve been thinking of lately: helping us plan our meals for a week, including putting together recipes that we’ll like and that meet our dietary goals, and automatically drafting the associated shopping list. That sort of help sure would save time.

Lastly, Sweeney sees today’s IAs as being too solitary. They are locked inside our smartphones. In the future, specialized IAs could exist within websites or connected devices and they could communicate with each other to perform higher-level services for us. Since each IA is focused on a narrow domain, Sweeney suggests that we might be comfortable allowing them more autonomy. One example might be our willingness to permit a book-shopping IA to choose and purchase (up to a dollar threshold) books it has a high confidence level we’ll enjoy. And it could even know to make the purchase only after we’ve run out of good things to read.

Sweeney makes some insightful points in his article and his supporting graphics are great. As always, it’ll be interesting to watch how the marketplace evolves. We’ll see if IAs move in the direction that Sweeney predicts.