2018 is going to be an exciting year, marking the start of a huge leap in the field of conversational AI. Josefine Fouarge takes a look at how it has developed so far and where it's going very soon.
For years we have been trained to interact with machines – how to use a mouse, what to click for a specific action, and maybe even how to write code in a variety of languages. But talking, gestures, and facial expressions are the natural ways for us to communicate. Machines that can understand these nuances have, until now, existed only in Hollywood's imagination.
"Until now" are the key words here. Technology has evolved to the point where it can interpret human language and draw conclusions based on what was said or typed. The complex part is not just the algorithm, though; it's the ability to combine phonemes into words for speech recognition, letters into words for text recognition, and to map either of them onto meaning – and then react based on that. 2018 is going to be an exciting year to witness the start of a huge leap in that area, because today's technology is already capable of engaging with humans in a conversational way.
Where do we start?
Where do we see conversational interfaces? Chatbots and virtual assistants are probably the best-known examples. Used in customer service scenarios, conversational interfaces can already do a lot. They can react to very specific requests, like resetting a password, updating an address, or helping with the selection of a specific product. Usually, these can be found on a brand's website, in its messaging and social channels, and even in the IVR (interactive voice response) system.
If you have used a smart speaker like the Amazon Echo before, then you have dealt with a machine that interprets your words and derives meaning from them. For example, when you ask Alexa to play music, it analyzes your request and then, as a result, starts to play some tunes. Have you ever called a brand and been told to simply ask a question instead of pressing "1"? That's essentially a virtual assistant with speech recognition.
What’s the next step?
A variety of conversational interfaces is available: some present a list of items from which a user can pick; others react to specific keywords and can be used as a simple Q&A. The next step beyond these rather "simple-minded" versions is a conversational interface that is capable of handling all sorts of conversations, back and forth, without the need for human intervention. Today's state-of-the-art virtual assistant can disambiguate without a pick list, simply by asking for the missing information.
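To make the difference concrete, here is a minimal sketch of that "asking for the missing information" pattern, often called slot filling. Everything in it – the slot names, keyword lists, and function names – is a hypothetical illustration, not Nuance's implementation: the assistant needs both a product and an issue to act, and when one is missing from the user's words it asks a follow-up question instead of showing a pick list.

```python
# Hypothetical slot-filling sketch: keyword spotting plus follow-up questions.
REQUIRED_SLOTS = ["product", "issue"]

# Toy keyword vocabularies standing in for a real language-understanding model.
KEYWORDS = {
    "product": {"router", "modem", "phone"},
    "issue": {"password", "address", "billing"},
}

def extract_slots(utterance):
    """Fill each slot by simple keyword spotting over the utterance."""
    words = set(utterance.lower().split())
    return {slot: next(iter(words & vocab), None)
            for slot, vocab in KEYWORDS.items()}

def respond(utterance):
    """Return a clarifying question if a slot is empty, otherwise act."""
    slots = extract_slots(utterance)
    for slot in REQUIRED_SLOTS:
        if slots[slot] is None:
            # Disambiguate conversationally, not with a pick list.
            return f"Which {slot} is this about?"
    return (f"Okay, helping with your {slots['issue']} issue "
            f"on your {slots['product']}.")

print(respond("I need help with my password"))       # asks for the product
print(respond("My router password needs a reset"))   # both slots filled, acts
```

A production assistant would of course use trained intent and entity models rather than keyword sets, but the conversational loop – detect what's missing, ask, then act – is the same.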
That’s the goal.
The final, so far unsolved, stage is truly complex interaction: something that could simulate a heated discussion or a brainstorming session with a colleague – conversations that require a lot of external data or background information to shape them. These are the areas Nuance is working on, taking automated conversations from a simple back and forth to a genuine conversational tool that will allow you to augment your life.
To get an idea of what this future could look like, watch our vision of next-generation omni-channel customer engagement.