Kasisto Intelligent Assistant Targets Banking Industry

FierceFinanceIT recently published an article by Renee Caruthers about Kasisto, the SRI spin-off that provides a virtual assistant targeted at financial institutions. This development is in line with what experts predicted at this year’s SpeechTek: the growing specialization of intelligent assistants and supporting technologies.

Kasisto is built specifically for the banking industry. According to Caruthers’ article, Kasisto currently supports 75 banking use cases, and these will be expanded to include additional scenarios based on customer requests. Dror Oren, vice president of product at Kasisto, offered a sample of some of the supported use cases.

These include answering questions about the difference between your posted and available balance, asking about a specific transaction that should have posted to your account, or asking how much you spent at a particular store. All of these use cases sound like an expansion of the capabilities currently available in most automated 24/7 banking IVR systems. With Kasisto, the customer will be able to ask questions in a more free-form, natural way and get access to the same type of reporting features available from online banking.

Kasisto will also be able to perform more advanced services, such as processing transactions. There’s a demo video on the Kasisto website that shows a person using Kasisto to pay his credit card bill. He simply instructs Kasisto to pay his current balance from his checking account. Customers can also ask to dispute a charge. These types of automated services are already available today, but incorporating them into a seamless intelligent assistant interaction will more closely mimic a true human call agent experience.
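To give a rough sense of the kind of intent matching such an assistant performs behind the scenes, here is a toy keyword-overlap sketch. This is purely illustrative, not Kasisto’s actual implementation; the intent names and keyword lists are invented for this example.

```python
import re

# Toy intent matcher: maps a free-form banking question to a known use case.
# The intent names and keywords below are invented for this sketch; a real
# system uses far richer natural language understanding than keyword overlap.
INTENT_KEYWORDS = {
    "balance_inquiry": {"balance", "available", "posted"},
    "transaction_lookup": {"transaction", "charge", "spent", "store"},
    "pay_bill": {"pay", "bill", "credit", "card"},
    "dispute_transaction": {"dispute", "wrong", "unauthorized"},
}

def match_intent(utterance: str) -> str:
    """Pick the intent whose keywords best overlap the utterance, or hand off."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {intent: len(words & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "handoff_to_agent"

print(match_intent("Please pay my credit card bill"))  # pay_bill
print(match_intent("What is my available balance?"))   # balance_inquiry
```

Note the fallback: when no intent scores above zero, the sketch hands off, mirroring how an assistant like Kasisto can escalate to a live agent when it gets in trouble.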

Oren is also quoted in the article as saying that Kasisto can support internal banking business functions, such as enabling a banking executive to pass signature limits on to another person when he or she is on vacation.

Intelligent assistants can be much more effective if their universe is narrowly focused on a specific domain. Banking is a good example of a subject area that’s well contained, making it easier for the assistant to understand the context of questions and provide good answers. Kasisto can hand users off to live agents if it gets in trouble, though. Only time will tell if the technology ever improves to the point where people actually prefer speaking to “a machine.”

Jibo vs. the Smartphone and the Future of Conversational Robots

In the current issue of Popular Mechanics, Billy Baker published an interview with Cynthia Breazeal, associate professor at MIT, founder of the Personal Robots Group at the MIT Media Lab, and founder and CEO of Jibo. I wrote about the social robot Jibo in an earlier post. If the engaging little robot works as promised, it will be highly conversational.

Jibo is targeted to ship one year from now. While Baker’s article doesn’t delve into the specifics of the robot’s current state of development, you get the sense that there’s still a good bit of work to be done before it’s ready to ship. 4,800 units were pre-ordered as part of a very successful Indiegogo campaign. But once the early adopters have their Jibo, what happens next?

Baker titled his article The Jibo Controversy. The controversy Baker refers to is disagreement over whether a product like Jibo is even needed, or whether it’s a cute but frivolous device. Though they aren’t designed to be quite as cute or social, the same controversy swirls around WiFi-enabled smart “home assistants” like the Ubi, EmoSpark, and Amazon’s Echo. Will people want to use these devices? What need do they address, or what gap in the market do they fill? And the biggest question of all: will people put down their smartphones long enough to even notice that the robot is sitting there, waiting for a trigger word?

I met Dr. Roberto Pieraccini, responsible for Conversational Technologies at Jibo, at two technology conferences this year. At one point, I had a chance to speak with him briefly and asked him the obvious question: how can Jibo compete with the smartphone? We’re all so tethered to our mobile devices; can anything pry us away from them? Pieraccini answered frankly that he didn’t know. We’ll have to wait and see.

During the interview with Baker, Breazeal made it clear that her social robot is all about getting people to put away their phones and re-enter the world around them. The mother of three boys, Breazeal often finds her conversations with her children cut short by their addiction to the little screens we all carry around. The current user interface for smart technology is, in Breazeal’s opinion, undermining our relationships with the people around us. To paraphrase Breazeal from Baker’s interview, one of the primary goals of Jibo is to let people stay in their life, in the real world, in the moment, instead of having to find their device, enter a passcode to unlock it, and open an app.

Breazeal also dislikes the fact that smartphones are strictly linked to an individual. This exclusion of others from “my device” adds to the force field of isolation these devices conjure up around us. Jibo, on the other hand, is a family or communal assistant. Everyone in the household can talk to it, and it can even help foster communication between family members.

I like Breazeal’s vision for more inclusive conversational assistants that are easily accessible within our normal environment. I don’t think it’s a stretch to predict that our interactions with smart technology are bound to become more seamless. Having to look at or talk to a smartphone is annoying and hopefully a transitory necessity. But will we want to give up our individual assistants in favor of ones shared by the whole family? What if I want to ask my assistant a question I’d rather it didn’t share with my parents? And what if I want my assistant to go everywhere that I go, and not just wait for me at home in the kitchen?

It won’t be long before we’ll find out the answers to these questions. I believe there is a future for smart conversational home assistants. Like Baker, I’m just not sure how that future will take shape. But I hope that Jibo and other like-minded smart machines will become our helpful partners.

IntelliResponse Joins [24]7 for Smarter Virtual Agents

[24]7, a provider of customer support technology, recently acquired IntelliResponse, a top vendor of virtual agent and other web self-service solutions. I did a brief email-based interview with the IntelliResponse team after the acquisition to learn more about how joining forces with [24]7 will strengthen their brand.

Prior to the IntelliResponse acquisition, [24]7 didn’t have a virtual agent product. They did have a well-rounded suite of customer support solutions, including dashboards and tools to assist live agents supporting customer interactions via web and mobile chat, voice, and social media. They also had predictive analytics that track customer data and flag patterns to alert call agents about potential reasons customers might be calling. These predictive services are also linked into smart interactive voice response systems to provide customers with a tailored support experience right from their smartphone.

The [24]7 product portfolio, prior to the IntelliResponse acquisition, seemed to be focused on helping live agents and automated systems provide customers with the best support possible. The IntelliResponse virtual agent technology adds a strong self-service component to the [24]7 portfolio. It became clear from our email exchange that the IntelliResponse team sees huge potential in joining their virtual agent capabilities with the predictive analytics that already enable [24]7 solutions to excel at personalized, smart customer support.

So how do predictive customer analytics work? The system gathers information about the customer and uses the data to anticipate what the customer might be calling about and the type of support they need. There’s an online video on the [24]7 website showcasing predictive analytics using the following example:

A traveling consultant who just returned from a foreign country notices that his wireless bill is much higher than normal. When he calls the support line, the system has already flagged this anomaly in the consultant’s account. The system can make an educated guess that the customer is probably calling to get information about these recent high charges to his account.

When the consultant calls, he’s greeted by a pleasant, automated female voice that asks if he’s calling about billing. The automated solution has voice recognition and can understand the customer’s responses. It texts a link to the customer’s smartphone and he can access the link to see details about the charges to his account. He can clearly see roaming charges in a foreign country caused the unusually high amount of his current bill. A live agent can seamlessly engage with him and assist him in adding an international plan to his account.

Predictive analytics seem to be a great match for self-service virtual agent applications. The top goal of a self-service system is to infer the customer’s intent and give them the most accurate answer or set of instructions as quickly as possible. Web interactions are complicated. It’s not always easy to understand what the customer really wants when they start typing queries into a search box or a virtual agent’s user interface. But what if you knew where the customer had already been on the website, what actions they’d performed, and you had background information about previous purchases or other account details? And what if you had an analytics engine that could connect the dots to figure out what the customer might be looking for? You could use that information to make the virtual agent look really smart. The days when virtual agents ask “how can I help you?” might soon be a thing of the past. With systems like [24]7 IntelliResponse, they’ll already know the answer.
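To make “connecting the dots” concrete, here is a minimal rule-based sketch of the idea. The signal names, weights, and intents are entirely hypothetical; real predictive engines like [24]7’s use statistical models over much richer data.

```python
# Toy predictive-intent scorer: combines account signals and recent web
# activity to guess why a customer is contacting support. All signals,
# weights, and intent names here are hypothetical.

def predict_intent(signals: dict) -> str:
    scores = {"billing_question": 0.0, "plan_change": 0.0, "tech_support": 0.0}
    if signals.get("bill_vs_average", 1.0) > 1.5:   # bill 50%+ above normal
        scores["billing_question"] += 2.0
    if signals.get("roaming_charges", False):
        scores["billing_question"] += 1.0
        scores["plan_change"] += 1.0                # may want an international plan
    if "outage_page" in signals.get("pages_visited", []):
        scores["tech_support"] += 2.0
    return max(scores, key=scores.get)

# The traveling consultant from the [24]7 demo scenario:
traveler = {"bill_vs_average": 2.4, "roaming_charges": True}
print(predict_intent(traveler))  # billing_question
```

Even a crude scorer like this shows why the greeting “are you calling about billing?” can be right so often: the anomaly was flagged before the customer ever dialed.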

Looking for a Hammer? Ask the Talking Robot!

If you’re like me, you can never find what you’re looking for in a Lowe’s Home Improvement store. Not only that, but locating an available clerk who can point you in the right direction is nearly impossible, especially when you’re in a hurry.

Technology has come to the rescue! Lowe’s Innovation Labs and Silicon Valley technology company Fellow Robots have teamed up to build an innovative retail service robot. According to a recent Gizmag article, the robot is called OSHbot and was made for Orchard Supply Hardware (hence the name).

OSHbot looks similar to a slim kiosk on wheels. It’s equipped with speech recognition and natural language processing technology and is marginally conversational. In the demonstration video, OSHbot proactively approaches customers as they enter the store and asks them how it can help. Customers can tell the robot what item they’re looking for. Alternatively, they can hold up an item to the robot’s 3D camera and it can use the image to locate the object in its database.

OSHbot is even integrated with the store’s realtime inventory system. It knows where everything is and whether it’s in stock or not. The robot can connect with a live clerk if someone asks it a question it can’t answer. OSHbot uses scanning technology to build a map of the store so that it can navigate the aisles autonomously. All you need to do is follow the bot and presto! You’re standing right in front of the very item you need.
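The inventory lookup at the heart of this interaction is conceptually simple, something like the following sketch. The product names, aisle numbers, and responses are invented for illustration; OSHbot’s real system is tied to live store inventory.

```python
# Toy store-inventory lookup of the kind a retail service robot performs:
# map a requested item to its aisle and stock status. All data is invented.

INVENTORY = {
    "hammer":      {"aisle": 12, "in_stock": True},
    "paint brush": {"aisle": 7,  "in_stock": False},
}

def locate(item: str) -> str:
    record = INVENTORY.get(item.lower())
    if record is None:
        # Hand off to a human, as OSHbot does for questions it can't answer.
        return "Let me connect you with a store associate."
    if not record["in_stock"]:
        return f"Sorry, the {item} is currently out of stock."
    return f"Follow me! The {item} is in aisle {record['aisle']}."

print(locate("hammer"))  # Follow me! The hammer is in aisle 12.
```

The hard parts, of course, are everything around this lookup: speech recognition, the 3D camera matching, and autonomous navigation through the aisles.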

Is OSHbot the first of many speech-enabled service robots? It certainly combines a toolbox of technologies to dramatically improve the customer’s experience in a brick-and-mortar store, especially a place as challenging to navigate as a hardware store. The fun of interacting with a talking robot might even provide an incentive to buy your hammer at a local shop, rather than just ordering it online. Of all the conversational robots I’ve seen, OSHbot is undoubtedly the most practical.

Insights into Inbenta – Providing Artificial Intelligence for the Enterprise

I recently had the opportunity to learn more about Inbenta, a provider of Natural Language Search technology for intelligent assistant and web self-service technologies. I spoke with global marketing director Julie Casson and linguist Kelly Foster to gain insight into a company I didn’t know much about. Inbenta originated in Barcelona, and now has offices in the United States, France, Singapore, Brazil and the Netherlands. Casson and Foster are located at the office in Sunnyvale, California.

Prior to our conversation, I knew that Inbenta offers intelligent assistant technology and an extremely innovative 3D avatar, called Victoria. I’ll talk more about Victoria in a moment. But first, I’ll summarize what I learned about Inbenta’s underlying technology.

I asked Foster what drives the Inbenta intelligent assistant natural language processing engine. It turns out that Inbenta has its own powerful semantic technology that the company has developed and cultivated over many years. The semantic engine runs atop a proprietary lexicon that enables Inbenta’s search and virtual assistant technologies to perform complex natural language processing operations.

As a linguist, Foster knows a lot about how languages work. She explained to me that the Inbenta semantic search engine is based on something called the “Meaning-Text Theory,” which was developed by Igor Melchuk and Aleksandr Zolkovskij. There’s a whole page on the Inbenta website that describes the basics of Meaning-Text Theory. There’s also a description on Wikipedia, which I suppose means it must be real! My wildly oversimplified explanation of the theory (and please don’t quote me on this, in case I’ve got it all wrong) is that all languages are composed of lexical units, and that these lexical units can generally be categorized into a finite number of lexical functions. Lexical functions are the basic building blocks of language that define semantic relationships between concepts, and that ultimately allow us to use language to create meaning. You can find examples of lexical functions here.
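To give a flavor of what lexical functions look like in practice, here is a toy encoding of two classic ones: Magn (the conventional intensifier of a word) and Oper1 (the “support verb” a speaker uses to perform it). The entries are standard textbook examples of Meaning-Text Theory, not drawn from Inbenta’s proprietary lexicon.

```python
# Toy lexical-function table in the Meaning-Text Theory style.
# Magn(X) = the conventional intensifier of X; Oper1(X) = the support verb
# that goes with X. These entries are standard textbook examples of the
# theory, not part of Inbenta's actual lexicon.

LEXICAL_FUNCTIONS = {
    "Magn": {"rain": "heavy", "smoker": "heavy", "applause": "thunderous"},
    "Oper1": {"attention": "pay", "decision": "make", "walk": "take"},
}

def apply_lf(function: str, word: str) -> str:
    return LEXICAL_FUNCTIONS[function].get(word, "?")

print(apply_lf("Magn", "rain"))        # heavy  ("heavy rain", never "strong rain")
print(apply_lf("Oper1", "attention"))  # pay    ("pay attention", never "do attention")
```

The point is that these pairings are conventional rather than logical, which is exactly the kind of knowledge a semantic engine needs to interpret natural language the way people actually use it.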

Inbenta has created its comprehensive semantic search engine using the Meaning-Text Theory approach and the lexicon is available in many different languages. The fact that Inbenta doesn’t have to rely on a third party for its natural language processing technology means that it can offer customers a rich feature set, while maintaining its independence and continuing to enhance its product at its own pace.

So how does Inbenta position itself in the web self-service/intelligent assistant market space? Casson says the company is focused on providing businesses with tools to improve customer support. They offer everything from a strong search engine that users can access to find answers to tough questions, to full-blown intelligent assistants. Using these technologies, customers can find answers themselves, freeing up human call center agents to focus on more complex and important customer inquiries.

Which brings us back to Victoria. When you visit Inbenta’s website, you’ll see a large white question mark surrounded by a circle. Click on the question mark and Victoria, a remarkably lifelike human avatar, appears. It turns out that Victoria was created in response to a request from Telefonica, a large Spanish broadband and telecommunications provider and major customer of Inbenta. Victoria responds to text input and she can deliver her responses in written or spoken form. Once she is connected to a knowledge base, she can search through information to find responses to customer inquiries. The version of Victoria that’s on Inbenta’s website isn’t plugged into a large knowledge base, so conversations with her are pretty limited. But you can get a general idea of how the technology works. While Victoria’s gestures and motions can be a little distracting at times, the avatar clearly represents an innovative technology with lots of possibilities. It’ll be interesting to see how Inbenta’s 3D avatar evolves and how companies leverage the technology to more effectively engage customers.

As I learned from my discussion with Casson and Foster, Inbenta has a lot of things going for it. Its proprietary semantic engine can drive powerful language processing. Its unique 3D avatar has many possibilities to enhance customer support. Its broad existing customer base, which includes companies such as Schlage, Groupon, and Ticketmaster, gives it the experience to implement effective web self-service solutions. Inbenta is certainly a company to consider if you’re in the market for customer-facing intelligent assistant technology.

Intelligent Assistants to be a Focal Point of 2015 Mobile Voice Conference

AVIOS is hosting the Mobile Voice Conference next year in San Jose, April 20-21. The 2015 Mobile Voice Conference will include topics on enterprise intelligent assistants and supporting technologies. If you’re interested in attending, you can register at a reduced fee from now through December 31st.

For those who aren’t familiar with it, AVIOS stands for Applied Voice Input/Output Society. AVIOS is an international speech technology applications professional society that has been around since 1981. It includes numerous local chapters where people interested in speech technology have a chance to meet and share ideas. The Mobile Voice Conference is an annual event that AVIOS has sponsored for quite some time in conjunction with TMA Associates.

According to the preliminary program, the 2015 Mobile Voice Conference will be organized into two tracks. Track 1 will focus on applications and use cases, while Track 2 takes a deeper dive into technologies and tools. Some of the proposed sessions in Track 1 include: the evolution of the contact center in a mobile world, creating effective virtual agents, and personal assistants in the enterprise. Track 2 offers sessions such as: text-to-speech status and options, speech recognition technology options and issues, and talking to a computer: a deeper look. There will also be case studies of real-world implementations of speech technologies.

If you want to get a flavor for what a Mobile Voice Conference is like, you can take a look at the presentations from the 2014 conference.

Next year’s conference is shaping up to be an extremely informative gathering for any company evaluating enterprise virtual assistants. Whether you’re in the early stages of developing a business case around intelligent assistants, or already deploying these smart technologies to get a competitive edge, check out the preliminary program and grab your discounted registration while it lasts.