Robotbase is Building the Personal Robot of the Future: Pre-orders Available Now

As reported by TechCrunch, Robotbase has launched a Kickstarter campaign to presell its artificially intelligent personal robot. CEO Duy Huynh was at CES this past week pitching the company and showing off Maya, the prototype version of the robot, to TechCrunch reporters and others. You can watch the pitch and demo session from within the TechCrunch article.

Robotbase seems to be referring to the product as the “Personal Robot.” Once you purchase the robot, you can call it by whatever name you like. I see a lot of similarities between Jibo, the social robot, and the Robotbase robot. The obvious major difference is that the Personal Robot is mobile, being built atop a platform with wheels. It comes with software that enables it to scan a room and make a map that it can use to autonomously navigate around obstacles.
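Robotbase hasn’t published details of its navigation software, but the general recipe is well known: build an occupancy-grid map of the room and run a graph search over it to route around obstacles. Here’s a minimal Python sketch of that idea using A* search; the room layout and coordinates are, of course, made up.

```python
# Toy grid-based navigation: the map is an occupancy grid (0 = free,
# 1 = obstacle) and A* search finds a path around the obstacles.
# This illustrates the general technique, not Robotbase's actual stack.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route around the obstacles

room = [[0, 0, 0, 0],
        [0, 1, 1, 0],   # a couch blocking the middle of the room
        [0, 0, 0, 0]]
print(astar(room, (0, 0), (2, 3)))
```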

If you look past its ability to move around a room, the Personal Robot is set to have many features reminiscent of Jibo’s proposed capabilities list. It is supposed to connect to and control your in-home connected devices, recognize faces and other objects, understand speech input, get information from the cloud, act as group photographer, and generally perform the activities of a personal assistant.

During the CES demo, CEO Huynh also demoed the Personal Robot’s abilities as a retail assistant. That’s very similar to the hardware store robot OSHbot that I wrote about last month. Robotbase’s robot is able to understand a customer question, provide an answer, and lead the customer to the location of the desired shopping item if needed (just like OSHbot). It strikes me that this retail use case might be an easier one to succeed at than the broader personal assistant use case. In a retail setting, the robot knows that the vast majority of questions will be about store merchandise. A personal home robot, on the other hand, will need to be able to anticipate and correctly react to a whole host of possible topics and conversational items.

Both Jibo and Robotbase’s Personal Robot promise a lot and they’re both still under development. In the CES demo, Huynh talks about the company’s deep learning algorithms. In fact, it’s this software technology that Huynh lauds as Robotbase’s most significant achievement. These algorithms are intended to give the Personal Robot the ability to get smarter over time based on interactions (unsupervised learning).
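Robotbase hasn’t shared what those algorithms actually look like, but the flavor of unsupervised learning is easy to illustrate: find structure in interaction data without anyone labeling it. Here’s a toy Python example using generic k-means clustering, which is my stand-in, not anything Robotbase has described.

```python
# Toy unsupervised learning: cluster the hours at which a user talks to
# the robot to discover routine "interaction windows" with no labeled
# training data. Generic k-means, not Robotbase's (unpublished) pipeline.
import numpy as np
from sklearn.cluster import KMeans

# Hour of day for each interaction, logged over a week (hypothetical data).
hours = np.array([[7.0], [7.2], [7.5], [18.1], [18.4], [19.0], [22.3], [22.5]])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(hours)
for center in sorted(model.cluster_centers_.flatten()):
    print(f"routine interaction window around {center:.1f}h")
```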

I like the concept of Robotbase’s Personal Robot and hope to see the company succeed. It does seem to me that delivering fully on the demonstrated robot capabilities will be tough. In the demo videos, the Personal Robot speaks with a human voice, using natural intonation. It even reads a child’s story, stressing the right words and giving the story emotion by using its voice. Mimicking this type of human intonation is surprisingly difficult for an automated text-to-speech program. It might work marginally well for canned responses, but it would be hard to accomplish for output that’s variable and created on the fly.
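Part of why canned responses have an edge: a developer can hand-annotate them with prosody markup like the W3C’s SSML, which many text-to-speech engines accept. Output generated on the fly gets no such human tuning. A small illustration follows; the speak function is just a stand-in for a real TTS call.

```python
# SSML (the W3C Speech Synthesis Markup Language) lets a developer
# hand-tune prosody for a canned phrase. Dynamic output has no human
# in the loop to add this markup, which is part of why it sounds flat.
canned = """<speak>
  <s>Once upon a time, there was a <emphasis level="strong">very</emphasis> brave mouse.</s>
  <break time="400ms"/>
  <s><prosody rate="slow" pitch="low">And the house fell silent.</prosody></s>
</speak>"""

def speak(ssml):
    # Placeholder for a real TTS engine call; many TTS APIs accept SSML.
    print(ssml)

speak(canned)
```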

A Personal Robot that follows us around everywhere we go in our home would also need to be sensitive to the context we’re in at any given time. It would need to have a good sense for when we want to be interrupted with information and when we don’t. Even in the Kickstarter video, I get the sense that the Personal Robot could become annoying. If it’s with me in the kitchen, I don’t want it bugging me every couple of minutes asking if I need help with a recipe. I also don’t necessarily want it waking me up or making assumptions about how I slept. And I’m not sure I want it ordering lunch for me without checking to see what I want first, even if it knows what I usually get.

I’m assuming you can control these behaviors and that the robot will get to know your preferences over time. But getting all that right in the software is bound to be a challenge. I have confidence that at some point in the future, the vision of truly effective, unobtrusively helpful personal robots will be a reality. Let’s hope that future is right around the corner.

Jibo vs. the Smartphone and the Future of Conversational Robots

In the current issue of Popular Mechanics, Billy Baker published an interview with Cynthia Breazeal, an associate professor at MIT, founder of the Personal Robots Group at the MIT Media Lab, and founder and CEO of Jibo. I wrote about the social robot Jibo in an earlier post. If the engaging little robot works as promised, it will be highly conversational.

Jibo is targeted to ship one year from now. While Baker’s article doesn’t delve into the specifics of the robot’s current state of development, you get the sense that there’s still a good bit of work to be done before it’s ready to ship. Some 4,800 units were pre-ordered as part of a very successful Indiegogo campaign. But once the early adopters have their Jibo, what happens next?

Baker titled his article The Jibo Controversy. The controversy Baker refers to is the disagreement over whether a product like Jibo is even needed, or whether it’s just a cute but frivolous device. Though they aren’t designed to be quite as cute or social, the same controversy swirls around WiFi-enabled smart “home assistants” like the Ubi, EmoSpark, and Amazon’s Echo. Will people want to use these devices? What need do they address, or what gap in the market do they fill? And the biggest question of all: will people put down their smartphones long enough to even notice that the robot is sitting there waiting on a trigger word?

I met Dr. Roberto Pieraccini, responsible for Conversational Technologies at Jibo, at two technology conferences this year. At one point, I had a chance to speak to him briefly. I asked him the obvious question: how can Jibo compete with the smartphone? We’re all already so tethered to our mobile devices; can anything pry us away from them? Pieraccini answered frankly that he didn’t know. We’ll have to wait and see.

During the interview with Baker, Breazeal made it clear that her social robot is all about getting people to put away their phones and re-enter the world around them. The mother of three boys, Breazeal often finds her conversations with her children cut short by their addiction to those little screens we all carry around. The current user interface for smart technology is, in Breazeal’s opinion, undermining our relationships with the people around us. To paraphrase Breazeal from Baker’s interview: one of the primary goals of Jibo is to allow people to stay in their life, in the real world, in the moment, instead of having to find their device, enter their passcode to unlock it, and open an app.

Breazeal also dislikes the fact that smartphones are strictly linked to an individual. This exclusion of others from “my device” adds to the force field of isolation they conjure up around us. Jibo, on the other hand, is a family or communal assistant. Everyone in the household can talk to it and it can even help foster communication between family members.

I like Breazeal’s vision for more inclusive conversational assistants that are easily accessible within our normal environment. I don’t think it’s a stretch to predict that our interactions with smart technology are bound to become more seamless. Having to look at or talk to a smartphone is annoying and hopefully a transitory necessity. But will we want to give up our individual assistants in favor of ones shared by the whole family? What if I want to ask my assistant a question I’d rather it didn’t share with my parents? And what if I want my assistant to go everywhere that I go, and not just wait for me at home in the kitchen?

It won’t be long before we’ll find out the answers to these questions. I believe there is a future for smart conversational home assistants. Like Baker, I’m just not sure how that future will take shape. But I hope that Jibo and other like-minded smart machines will become our helpful partners.

Looking for a Hammer? Ask the Talking Robot!

If you’re like me, you can never find what you’re looking for in a Lowe’s Home Improvement store. Not only that, but locating an available clerk who can point you in the right direction is nearly impossible, especially when you’re in a hurry.

Technology has come to the rescue! Lowe’s Innovation Labs and Silicon Valley technology company Fellow Robots have teamed up to build an innovative retail service robot. According to a recent Gizmag article, the robot is called OSHbot and was made for Orchard Supply Hardware (hence the name).

OSHbot looks similar to a slim kiosk on wheels. It’s equipped with speech recognition and natural language processing technology and is marginally conversational. In the demonstration video, OSHbot proactively approaches customers as they enter the store and asks them how it can help. Customers can tell the robot what item they’re looking for. Alternatively, they can hold up an item to the robot’s 3D camera and it can use the image to locate the object in its database.

OSHbot is even integrated with the store’s real-time inventory system. It knows where everything is and whether it’s in stock or not. The robot can connect with a live clerk if someone asks it a question it can’t answer. OSHbot uses scanning technology to build a map of the store so that it can navigate the aisles autonomously. All you need to do is follow the bot and presto! You’re standing right in front of the very item you need.
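To make that interaction flow concrete, here’s a toy sketch of the loop just described: match the customer’s request against a product database, check live stock, and hand off the location for navigation. The inventory data and function names are hypothetical; Fellow Robots hasn’t published OSHbot’s internals.

```python
# Toy version of the OSHbot flow: look up the requested item, check the
# real-time stock flag, and return the aisle location (or escalate to a
# human clerk). All product data here is made up.
INVENTORY = {
    "hammer":    {"aisle": (12, "B"), "in_stock": True},
    "wood glue": {"aisle": (7, "D"),  "in_stock": False},
}

def assist(query):
    for item, info in INVENTORY.items():
        if item in query.lower():
            if not info["in_stock"]:
                return f"Sorry, we're out of {item}. Let me call a clerk."
            aisle, bay = info["aisle"]
            return f"Follow me! The {item} is in aisle {aisle}, bay {bay}."
    return "I didn't catch that. Let me connect you with a clerk."

print(assist("Where can I find a hammer?"))
print(assist("Do you have wood glue?"))
```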

Is OSHbot the first of many speech-enabled service robots? It certainly combines a toolbox of technologies to dramatically improve the customer’s experience in a brick-and-mortar store, especially a place as challenging to navigate as a hardware store. The fun of interacting with a talking robot might even provide an incentive to buy your hammer at a local shop, rather than just ordering it online. Of all the conversational robots I’ve seen, OSHbot is undoubtedly the most practical.

What I Wish Amazon’s Echo Could Do

Amazon suddenly entered the social robot market last week. Amazon isn’t marketing Echo as a “social robot.” Instead, it’s positioning the device as a smart, voice-activated speaker that’s always connected to the Internet. Somewhat confusingly, the intelligent assistant within Echo is called Alexa, and “Alexa” is also the wake word that activates the Echo.
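The wake-word pattern itself is simple to illustrate: the device listens continuously but only acts on speech that follows the trigger. Here’s a toy sketch with simulated transcripts; a real device does the detection with an always-on local keyword spotter rather than the string matching shown here.

```python
# Toy wake-word gating: ignore everything in the room except utterances
# that start with the trigger word. Transcripts are simulated.
WAKE_WORD = "alexa"

def handle(command):
    print(f"acting on: {command!r}")

# Simulated stream of transcribed utterances heard in the room.
utterances = [
    "what's for dinner tonight",               # ignored: no wake word
    "alexa what's the weather tomorrow",
    "alexa add olive oil to my shopping list",
]

for heard in utterances:
    words = heard.lower().split()
    if words and words[0] == WAKE_WORD:
        handle(" ".join(words[1:]))
```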

In the marketing video, Alexa is shown interacting with family members and is almost portrayed as an addition to the family. Dad can ask her for help when he doesn’t know the answer to his daughter’s homework question, and sister can get Alexa to help poke fun, albeit unknowingly, at her pesky brother.

Right now, Alexa’s capabilities seem fairly narrow. The assistant can play music, via a Bluetooth connection to your smartphone or tablet, from streaming services you’ve already subscribed to. It can search Wikipedia and weather data sources to answer general questions or give weather updates. It can play news reports from certain radio stations. The marketing video shows Alexa telling jokes, but it doesn’t say what joke database is being used. It can track to-do lists and shopping lists, but it’s not clear whether these are Amazon proprietary lists or whether Alexa can connect to your current list provider of choice.

Echo appears to provide similar functionality to Ubi. Ubi, however, is an open platform with well-documented APIs and a growing community of developers providing new capabilities for the device. The team at Ubi recently announced the launch of the Ubi Channel on IFTTT (If This Then That), a service that lets developers create functions for connected devices. It remains to be seen what APIs Amazon will publish for Echo and whether it intends to court an open developer community to program features that can augment the Echo and Alexa repertoire.
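The core IFTTT idea, pairing a trigger with an action, is easy to sketch. This is just the concept in miniature, not IFTTT’s actual channel API, and the Ubi rule shown is hypothetical.

```python
# A minimal "if this then that" rule engine: a rule is a trigger
# predicate paired with an action. Concept sketch only; this is not
# IFTTT's real API.
rules = []

def on(trigger, action):
    rules.append((trigger, action))

def publish(event):
    for trigger, action in rules:
        if trigger(event):
            action(event)

# Hypothetical rule: when the Ubi hears "movie time", dim the lights.
on(lambda e: e.get("device") == "ubi" and "movie time" in e.get("speech", ""),
   lambda e: print("dimming the living room lights"))

publish({"device": "ubi", "speech": "hey ubi, movie time"})
```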

Connected intelligent assistant devices like Echo, Ubi, and the social robot Jibo are pioneers in a new product category. How successful will they be? It seems to me that their most powerful rival is the smartphone. Will I bother to ask my plugged-in intelligent assistant to convert tablespoons to cups, or will I just look it up on my smartphone or ask Siri or Google Now? Will I ask my plugged-in assistant about the weather, or just check my phone?

If these plugged-in, auditory-only devices can do things my smartphone does AND other things my phone can’t yet do, I might get hooked on using them. To do what my phone does, Alexa would need to read my texts as they come in and let me dictate texts to be sent. It would need to tell me when I get what it knows is an important email and read it to me. It would keep me up to date on social media posts that I care about. It would tell me if there’s a TV show or a movie or a concert going on that I’d like to watch but don’t know about. It would help me while away the time by reading me articles that I’m interested in or providing some sort of entertainment. Oh, and it would make the occasional phone call.

Now for the things I wish the plugged-in intelligent assistant would do that my smartphone doesn’t. It could ignore the hundreds of promotional emails I get each day, but tell me about the one or two that I’m actually interested in. It would remind me of things that I haven’t even thought to be reminded of, like the fact that my niece has a birthday coming up, that I need to schedule a service appointment for my car, and that I’m about to run out of olive oil. It would tell me if a good friend is feeling down and could use a pick-me-up call. It would tell interesting stories. It would let me know if the sweet potatoes I’m baking in the oven are done without me having to open the oven door. It would keep me connected with the world by telling me what other people on the planet are doing and thinking, and maybe it would even connect me with them if I’m interested. Most importantly, it would subtly inform my friends and family about what I really want for Christmas!

Will Echo’s Alexa, Ubi, or Jibo be able to do any of these other important things that could tear me away from my smartphone? If so, I think they have a bright future.

Social Robots on the Move

MIT robotics professor Cynthia Breazeal recently launched JIBO, a social robot, via an Indiegogo campaign. Like the social AI cube EmoSpark, JIBO seems to have struck a chord with crowdfunding audiences. With 28 days still to go, the JIBO project has raised $656K, far exceeding its funding goal.

So what is it that people find so compelling about social robots? Some folks are probably intrigued by the prospect of having a robot companion that brings to life the type of intelligent robot friends they’re familiar with from science fiction. Others might just find JIBO’s cute robot personality attractive and want one of their own.

JIBO is being pitched as a robot for the family. It has facial recognition software that allows it to recognize who people are, speak to them by name, and deliver messages that are meant for them. It’s also supposed to be able to tell stories to children. JIBO is equipped with a round screen for a face, and it can change its expression or display graphics to emphasize what it’s saying.

Based on an interview with Dr. Breazeal, it sounds like JIBO has limited conversational ability at this point. It’s programmed to recognize certain commands and carry out specific functions. The idea is that JIBO’s capabilities will expand over time. A Software Development Kit (SDK) will allow programmers to write new code that accesses JIBO’s sensory systems and brain and adds new features to JIBO. I suppose content creators might be able to use the SDK to add new stories to JIBO’s storytelling repertoire too, but that’s not spelled out explicitly.
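Since the SDK hasn’t shipped, any code is pure speculation, but here’s one way a packaged storytelling skill might look. Every class and method name below is invented for illustration.

```python
# Purely speculative: the JIBO SDK hasn't shipped, so everything here is
# invented to show how packaged story content *might* plug into a skill.
class Story:
    def __init__(self, title, lines):
        self.title, self.lines = title, lines

class StorytellerSkill:
    def __init__(self):
        self.library = {}

    def add_story(self, story):           # content creators register new stories
        self.library[story.title.lower()] = story

    def tell(self, title, say=print):     # 'say' stands in for the robot's TTS
        story = self.library.get(title.lower())
        if story is None:
            say("I don't know that one yet.")
            return
        for line in story.lines:
            say(line)

skill = StorytellerSkill()
skill.add_story(Story("The Brave Mouse", ["Once upon a time...", "The end."]))
skill.tell("the brave mouse")
```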

EmoSpark, which was launched earlier this year on its own successful Indiegogo campaign, isn’t nearly as cute as JIBO. EmoSpark looks more like a funky clock-radio. It’s a talking cube. Like JIBO, it works via a WiFi connection. But based on the demos, EmoSpark is aiming to be somewhat more conversational. It will have access to Freebase in order to respond to fact-based questions. EmoSpark is also being pitched as an emotionally aware robot.

At the present time, though, neither EmoSpark nor JIBO has actually shipped. Both are still in the design and/or production phase. So it remains to be seen what features, and how much real conversational ability, will make it into the shipped versions of these social robots. There’s no doubt that social robots are creating a lot of buzz, though. And the market for these products, even if they’re more like toys than robots at this point, seems strong.

New Android Robots Unveiled at Tokyo Museum

Right on the heels of the announcement of Pepper, the talking robot offered by Softbank and Aldebaran Robotics, Tokyo’s National Museum of Emerging Science and Innovation unveiled two humanoid robots that are “staffing” the museum. One of the robots, called Kodomoroid, appears to just read from scripts. The second robot, known as Otonaroid, seems to be more of a conversational robot that can engage in spontaneous dialog. The robots were designed by Hiroshi Ishiguro of Osaka University.

There’s a video of the news conference where the two humanoid robots were unveiled. It’s difficult to judge from the video just how capable a conversationalist Otonaroid might be. Based on Ishiguro’s other work with androids, it seems that his focus is more on the appearance and movements of robots than on their conversational abilities.

Otonaroid and Kodomoroid are reminiscent of the virtual human twins Ada and Grace–named after Ada Lovelace and Grace Hopper–that answer visitor questions at the Museum of Science in Boston. I wrote about the twins in an earlier post that described the work of USC’s Institute for Creative Technologies (ICT). The ICT has been creating virtual humans for well over a decade and developing a framework of technologies to support all of the capabilities that a virtual human needs to be convincing, including conversational speech.

If I had to choose which to interact with, the Otonaroid and Kodomoroid robots or Ada and Grace, I’d probably pick Ada and Grace. Talking to a virtual representation of a human seems less creepy than interacting with a doll-like robot that’s supposed to very closely mimic human appearance and behavior. Ada and Grace are able to talk to visitors and answer questions, but I’m not sure their conversational abilities far surpass those of Otonaroid. We’ll have to await more evidence to make a judgment.

What will win out in the future: virtual humans or physical androids? I suppose there will be a role for both types of artificially intelligent companions and assistants. But both will certainly need conversational abilities if they’re to have enduring success in the marketplace.

New Conversational Robot Unveiled by Softbank and Aldebaran

Softbank Mobile, the Japanese wireless carrier, and Aldebaran Robotics, a robotics company headquartered in France, announced a partnership to build and distribute an intelligent robot called Pepper. The announcement was made with a lot of fanfare at a Softbank event in Japan. Bruno Maisonnier, Aldebaran’s Founder & CEO, describes Pepper as an emotional robot. Like Aldebaran’s smaller robot NAO, Pepper is equipped with sensors and software that enable it to detect human emotions through both visual and voice cues. Pepper itself is designed to appear friendly and non-threatening and to evoke emotions of happiness and ease in those who interact with it. I wrote about NAO in an earlier post.
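Aldebaran hasn’t published how Pepper’s emotion detection works, but a crude version of the voice-cue side can be sketched: extract loudness and a rough pitch proxy from an audio frame and map them to an arousal guess. The thresholds below are arbitrary toy values, not anything from Aldebaran.

```python
# Crude voice-cue reading: RMS energy approximates loudness and
# zero-crossing rate serves as a rough pitch proxy; together they give
# a toy "arousal" guess. Illustration only.
import numpy as np

def voice_cues(frame):
    rms = np.sqrt(np.mean(frame ** 2))                   # loudness
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2   # pitch-ish proxy
    return rms, zcr

def arousal_guess(rms, zcr):
    return "excited" if rms > 0.3 and zcr > 0.05 else "calm"

t = np.linspace(0, 0.1, 1600)                  # 0.1 s at 16 kHz
loud_high = 0.8 * np.sin(2 * np.pi * 600 * t)  # synthetic loud, high-pitched frame
print(arousal_guess(*voice_cues(loud_high)))   # -> excited
```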

You can watch a webcast of the entire press event where Pepper is presented to a live audience. Most of the webcast is in Japanese, and the dubbed-over translation is a bit awkward to listen to. If you fast forward to about the 36-minute mark, you can watch Maisonnier’s introduction of Pepper in English (not dubbed over). It’s difficult to tell from the demo how conversational the current version of Pepper really is. According to Yuri Kageyama of the Associated Press, who covered the live demo, Pepper looked good but displayed serious limitations. The robot’s voice recognition system appears to need some improvements, and its conversational abilities seem fairly rudimentary.

Maisonnier talks about an ecosystem and a set of APIs for Pepper that will allow developers to create third-party apps for the robot. He specifically mentions a physical “atelier” where developers can get together in person and collaborate on coding projects. The Aldebaran website currently hosts a store where NAO owners can download apps to augment the skills of their robots. There’s also an Aldebaran developer community that you can register for and take part in. It would be great if there were a way for chatbot scripters, or others with an aptitude for creating conversational stories, to package dialog or story content and make it available to run on the NAO and Pepper platforms.
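For a taste of what programming against these robots looks like today, here’s a minimal example using Aldebaran’s existing NAOqi Python SDK for NAO. I’m assuming Pepper will expose a similar interface, which isn’t confirmed; the IP address is a placeholder for your own robot.

```python
# Minimal NAOqi example: connect to the robot's text-to-speech service
# and have it speak a line. Requires Aldebaran's naoqi Python package
# and a NAO reachable on the local network.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder: your robot's address

tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)  # 9559 is NAOqi's default port
tts.say("Once upon a time, there was a very brave mouse.")
```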

Will Pepper acquire truly compelling conversational skills? That remains to be seen. But unless it can carry on a consistent conversation with us, or entertain us with engaging stories, it seems unlikely that Pepper will become the breakthrough technology that Softbank and Aldebaran claim it to be.