Part 3 – AI for customer care: Using Machine Learning to solve customer requests

In the final installment of his blog series, Nils Lenke discusses how we’re applying “big knowledge” for customer service tasks and moving virtual agents towards state-of-the-art AI learning.
By Nils Lenke
How you can use machine learning and natural language methods to accurately answer customer service questions

If you’ve been reading my series, you know that AI and machine learning (ML) can have a powerful impact on delivering the best possible customer care experience.  Specifically, we’re applying “big knowledge” for customer service tasks. What does this mean?

The first task we want to look at is “passage retrieval,” or finding the relevant text passages that contain the answer to a question. It helps to solve the “simple” intents in the customer service application discussed earlier in this series, where customers ask for something that is (hopefully) contained in the document database. Instead of searching for words only and hoping that the customer’s question and the target document use the same wording, we apply what we learned in the previous part of this series.

[Diagram: machine learning and natural language – dependency trees]

As the diagram shows, the trick is that we run both the database of documents and the question through the Natural Language (NL) pipeline and generate enhanced dependency trees. The former is done offline, to compile an index of such trees; the question is processed at run time. The best-matching trees are selected as answer candidates, the corresponding text passages are ranked, and the best candidate is read back to the customer. When we tested this with a customer, we found that it worked much better than their legacy search tool, which was based on traditional word-level search. That tool worked reasonably well when customers used an appropriate keyword (it would find the right answer in 84% of cases) but degraded considerably when people used full natural language queries (54% success). Our new solution scored 96% and 81%, respectively.
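To make the pipeline a bit more concrete, here is a minimal sketch in Python. The post does not name the toolkit we use, so spaCy’s dependency parser stands in for the NL pipeline, and simple (head, relation, dependent) triples with set overlap stand in for the enhanced trees and the tree-matching step; the passages and the question are invented examples.

```python
# Illustrative sketch only: spaCy stands in for the NL pipeline, and
# (head lemma, relation, dependent lemma) triples stand in for the
# enhanced dependency trees described in the post.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires the small English model to be installed

def dependency_triples(text):
    """Reduce a text to a set of (head lemma, relation, dependent lemma) triples."""
    doc = nlp(text)
    return {
        (tok.head.lemma_.lower(), tok.dep_, tok.lemma_.lower())
        for tok in doc
        if not tok.is_stop and not tok.is_punct
    }

# Offline step: compile an index of triples for every passage in the document database.
passages = [
    "You can reset your password from the account settings page.",
    "Refunds are issued within five business days of the return.",
]
index = [(p, dependency_triples(p)) for p in passages]

# Runtime step: parse the customer question and rank passages by triple overlap.
def rank_passages(question, index, top_k=1):
    q_triples = dependency_triples(question)
    scored = [
        (len(q_triples & p_triples) / (len(q_triples) or 1), passage)
        for passage, p_triples in index
    ]
    return sorted(scored, reverse=True)[:top_k]

print(rank_passages("How do I reset my password?", index))
```

Because the match is made on head–dependent relations rather than raw keywords, a question phrased quite differently from the document can still land on the right passage, which is the effect the evaluation numbers above illustrate.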

Similarly, we are now using this for another typical ML task, “clustering.” As I mentioned above, when customers contact the agents they may have one of several different “intents,” or buckets of tasks, in mind. How do we know which “buckets” exist? Of course you can do a manual analysis, which may be very time consuming. Instead, you can also use ML methods that try to find “clusters” of things that look similar compared to the rest of the data. In our case, imagine you look at 100,000 requests that came in: can you find 100 or 200 or 500 “buckets” that can be mapped to request types or intents? If you do this automatically, the additional benefit is that you can monitor how requests change over time, as what your customers want from you may also change over time. The naive approach is to apply standard machine learning clustering methods to the initial requests that customers make, at the word level. But given what we learned in this blog series, we can improve on this in two ways. First, we will not use only the initial user utterance. Because we can observe the entire interaction, including what the human (and automatic) agents actually do with the request, we should take that entire interaction into account when clustering. Second, we will again use our NLU pipeline to transform mere words into semantically enriched trees and run the clustering algorithms on these, as sketched below.
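As a rough illustration, the sketch below clusters a handful of made-up interactions. TF-IDF vectors over the full interaction text stand in for the semantically enriched tree features, and scikit-learn’s k-means stands in for whatever clustering method is actually used; in practice this would run over the 100,000 real requests with far more clusters.

```python
# Illustrative sketch only: TF-IDF over the whole interaction stands in for the
# semantically enriched tree features, and k-means stands in for the clustering
# method; the interactions below are invented examples.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Each "document" is the full interaction (initial request plus what the agent
# did with it), not just the opening utterance.
interactions = [
    "I forgot my password. Agent: sent password reset link.",
    "Can't log in to my account. Agent: sent password reset link.",
    "Where is my refund? Agent: checked refund status, escalated to billing.",
    "My order was charged twice. Agent: issued refund for duplicate charge.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(interactions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Inspect which interactions fall into which candidate "bucket" (intent).
for label, text in sorted(zip(labels, interactions)):
    print(label, text)
```

Re-running this kind of clustering periodically over fresh traffic is what makes it possible to see new request types emerge as customer needs change.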

Both of these approaches again allow us to reduce annotation time before a technology is put to use, and they let us take advantage of the unstructured data that so many enterprises already have readily available. The virtual assistant is, in essence, not only doing something useful for the end user, but also helping to translate a company’s unstructured data into exactly the kind of labeled big data that will allow the virtual agent to move towards state-of-the-art AI learning.

So, big knowledge means big changes coming to customer care through human-assisted virtual agents (HAVAs). With the right methods in place, they can drive a more collaborative engagement between humans and machines to create an effective and efficient customer experience for people around the world.

Read more posts in this series:


About Nils Lenke

Nils joined Nuance in 2003, after holding various roles for Philips Speech Processing for nearly a decade. Nils oversees the coordination of various research initiatives and activities across many of Nuance’s business units. He also organizes Nuance’s internal research conferences and coordinates Nuance’s ties to Academia and other research partners, most notably IBM. Nils attended the Universities of Bonn, Koblenz, Duisburg and Hagen, where he earned an M.A. in Communication Research, a Diploma in Computer Science, a Ph.D. in Computational Linguistics, and an M.Sc. in Environmental Sciences. Nils can speak six languages, including his mother tongue German, and a little Russian and Mandarin. In his spare time, Nils enjoys hiking and hunting in archives for documents that shed some light on the history of science in the early modern period.