At the recent SpeechTek 2014 event, I had an opportunity to speak with Brian Garr, Chief Executive Officer of LinguaSys, a very interesting company in the Natural Language Understanding space. The prevalence of speech-enabled applications and devices has increased dramatically in the past five years. We can talk to our smartphones, our cars, and even our home appliances. Soon we’ll be conversing with social robots like Ubi and Jibo. Speech recognition technology has made vast improvements over the years. We’re also used to typing in text when we want a search engine, an app, or an intelligent assistant to answer a question or help us complete a transaction. But what about natural language understanding technology? All of this incoming language, whether spoken or typed, has to be interpreted and understood before we can get back the answers we need.
Our intelligent assistants seem to understand us pretty well when we ask simple questions about the weather or fact-based questions like “What’s the capital of Wyoming?” But can they understand more complex statements? And can they understand them when we use different languages? LinguaSys is a niche player with a unique and very powerful offering that can make intelligent assistants smarter at understanding what we say. In fact, the LinguaSys technology powers many of the smart applications we use today that involve natural language input.
In talking with Garr about the LinguaSys technology, I learned that the company holds the keys to a veritable gold mine: a proprietary treasure trove of word meanings and semantic relationships that spans thousands of concepts and more than 18 languages. The LinguaSys semantic network was built up over the years the company spent offering machine translation software. Machine translation remains part of the product line, but the same underlying technology now enables the seamless translation and understanding of a huge range of possible conversational inputs. How does this work? In the LinguaSys database, word meanings, concepts, and relationships are stored in a language-neutral, symbolic format. That means the word “rainbow” maps to the same symbol whether the concept is uttered in Japanese, Urdu, or English.
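To make the idea concrete, here’s a minimal sketch in Python of how a language-neutral symbol table might work. The symbol names and word lists are my own invention for illustration; nothing here reflects LinguaSys’s actual internal format:

```python
# Hypothetical illustration of a language-neutral concept store.
# The symbols and entries are invented; they do not reflect
# LinguaSys's actual data format.
LEXICON = {
    ("en", "rainbow"): "CONCEPT_RAINBOW",
    ("ja", "虹"): "CONCEPT_RAINBOW",
    ("ur", "قوس قزح"): "CONCEPT_RAINBOW",
    ("en", "poodle"): "CONCEPT_POODLE",
    ("fr", "caniche"): "CONCEPT_POODLE",
}

def to_concept(language, word):
    """Resolve a surface word in a given language to its shared symbol."""
    return LEXICON.get((language, word.lower()))

# The English, Japanese, and Urdu words all resolve to the same symbol,
# so everything downstream can reason about one concept, not three words.
assert to_concept("en", "rainbow") == to_concept("ja", "虹") == to_concept("ur", "قوس قزح")
```

Once text from any supported language is reduced to these shared symbols, the understanding logic only has to be written once.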
The use case example Garr walked through during our discussion was of someone wanting to make a reservation at a hotel that would also accommodate their poodle. A speech recognition engine can probably do a good job of turning the sounds into the right words. But what are the chances it will know that a poodle is a dog, which is a domesticated animal, also known as a pet? This type of conceptual understanding is embedded in the LinguaSys system. It would take a monumental amount of work to build your own comprehensive semantic model capable of this type of understanding. You might be able to leverage something like Freebase for some applications. But then what happens when you need to start supporting other languages?
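The poodle example boils down to walking a chain of is-a relationships in a semantic network. A toy sketch of that kind of hypernym lookup (the entries are invented and are a tiny fraction of what a real network like LinguaSys’s would contain):

```python
# Toy is-a hierarchy, invented for illustration. A real semantic
# network holds thousands of concepts with many relationship types.
IS_A = {
    "poodle": "dog",
    "dog": "domesticated_animal",
    "domesticated_animal": "pet",
    "pet": "animal",
}

def is_kind_of(concept, ancestor):
    """Return True if `concept` reaches `ancestor` by following is-a links."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = IS_A.get(concept)
    return False

# "poodle" climbs poodle -> dog -> domesticated_animal -> pet.
assert is_kind_of("poodle", "pet")
```

The hard part isn’t the traversal, which is a few lines of code; it’s curating the millions of relationships, in 18-plus languages, that make the traversal useful.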
The Carabao Linguistic Virtual Machine, as the product offering is called, can basically be plugged into your application to give it an NLU boost. If you leverage the Carabao Linguistic VM for your hotel booking or general reservation system, the system will understand that when someone refers to their poodle, they’re looking for pet-friendly accommodations.
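Here’s a hedged sketch of what that plug-in pattern might look like from the application side. Every name here (`mentions_pet`, `build_booking`, the tiny `IS_A` table) is my own invention for illustration, not the actual Carabao API:

```python
# Hypothetical wiring of an NLU concept lookup into a booking request.
# None of these names come from the real Carabao Linguistic VM.
IS_A = {"poodle": "dog", "dog": "pet"}

def mentions_pet(utterance):
    """Check whether any word in the utterance resolves to the 'pet' concept."""
    for word in utterance.lower().split():
        concept = word.strip(".,!?")
        while concept is not None:
            if concept == "pet":
                return True
            concept = IS_A.get(concept)
    return False

def build_booking(utterance):
    """Turn a free-text request into structured search constraints."""
    return {"query": utterance, "pet_friendly": mentions_pet(utterance)}

booking = build_booking("I need a hotel room for me and my poodle")
# booking["pet_friendly"] is True
```

The point of the middleware arrangement is that the application only sees the structured result (here, a `pet_friendly` flag); the semantic heavy lifting happens inside the NLU layer.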
Garr refers to the LinguaSys products as middleware. You can access the solution via the cloud or from your own on-premises deployment. Based on my understanding of the product set, the components can be readily integrated into new or existing applications using industry-standard protocols.
I don’t know what the pricing model is for access to the LinguaSys middleware. The solution may not be affordable for smaller companies or independent botmaster types, but I don’t know that for sure. If your product or technology depends on being able to correctly understand language input, and especially if you’re challenged with accepting input in multiple languages, this is a product you’ll likely want to explore.