Part 2 – AI for customer care: Turning ‘bags of words’ into meaning with machine learning

In part two of my series on human-assisted virtual agents, I examine how machine learning is applied to a so-called "bag of words" to help machines understand human language.
By Nils Lenke

In my last post, I discussed how human agents and human-assisted virtual agents (HAVAs) can work together when machine learning and artificial intelligence are applied to customer care systems. Now let's take it a step further.

In machine learning you often need to compare or "match" things. For example, when you are looking for the right answer in a database, you compare the question to the possible answers stored there. If you want to sort intents into buckets (so-called clustering), you need to compare them with each other to see how similar they are. Many modern approaches do this by simply checking which words are present (or absent), at face value and in any order, an approach quite intuitively called "bag of words." The intuition is that if two sentences or texts are composed of roughly the same words, they probably capture a similar meaning. This approach works surprisingly well for many tasks (classical Internet search relies on it), although it ignores that language is actually more than a bag of words: sentences have structure and words have meaning. Let's look at two example sentences:

  1. How do I change the motor oil?
  2. Tell me how the engine lubricant gets replaced.

From the superficial bag-of-words perspective these look very different, although intuitively they capture a similar request, and customers would expect an HAVA to understand that. Purely statistical approaches solve this by observing (after looking at thousands and thousands of texts) that words like "oil" and "lubricant" often appear in similar contexts; in that way they implicitly learn the meaning of a word from the contexts it typically appears in.
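
To make that surface mismatch concrete, here is a minimal sketch in plain Python (the lowercase word tokenizer is a simplifying assumption on my part) that measures the bag-of-words overlap between the two requests:

    import re

    def bag_of_words(text):
        # Lowercase the text and collect its words as a set, ignoring order.
        return set(re.findall(r"[a-z]+", text.lower()))

    def jaccard(a, b):
        # Overlap between two bags: shared words divided by all words.
        return len(a & b) / len(a | b)

    q1 = bag_of_words("How do I change the motor oil?")
    q2 = bag_of_words("Tell me how the engine lubricant gets replaced.")

    print(q1 & q2)          # {'how', 'the'}
    print(jaccard(q1, q2))  # roughly 0.15

Only the function words "how" and "the" are shared; the content words that actually carry the request do not match at all.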

However, there is a long tradition within computational linguistics and symbolic AI of capturing aspects like the structure and meaning of language more explicitly. For one, you can capture the structure by assigning a syntax tree to a sentence or utterance. One class of such structures, so-called dependency trees, starts from the observation that the core of a sentence is the verb and that the other words "depend" on it; similarly, adjectives and other modifiers depend on the noun they accompany. Simplified dependency trees for (1) and (2) above could look like this:

[Figure: simplified dependency trees for sentences (1) and (2), with the corresponding verb and object subtrees circled in red]
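
You don't have to draw such trees by hand; off-the-shelf parsers produce them automatically. Here is a minimal sketch using the open-source spaCy library and its small English model (an illustrative choice on my part, not necessarily what a production HAVA would use):

    import spacy  # pip install spacy, then: python -m spacy download en_core_web_sm

    nlp = spacy.load("en_core_web_sm")

    for sentence in ("How do I change the motor oil?",
                     "Tell me how the engine lubricant gets replaced."):
        doc = nlp(sentence)
        for token in doc:
            # Each word points to the word it depends on, labeled with a relation type.
            print(f"{token.text:10} --{token.dep_}--> {token.head.text}")
        print()

In both parses, "oil" and "lubricant" attach directly to the verb, with "motor" and "engine" hanging off them as modifiers, mirroring the circled subtrees in the diagram.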

If you look at the parts circled in red, you can see that the two trees have become similar in structure. So if only we knew that change/replace, oil/lubricant and motor/engine mean the same or at least similar things, we would be there. In fact, many efforts have been made to capture such similarities, to sort words into buckets of similar meaning and to organize these buckets into hierarchies of concepts. Not the first, but a well-known one, was Roget's Thesaurus. Its modern, machine-readable equivalent is WordNet, a collection of 155,287 words mapped to 117,659 concepts (as of today!). And if we look at what it has to say about "engine," we see that it lists "motor" as a "sister" term to "engine":

S: (n) engine (motor that converts thermal energy to mechanical work)

In WordNet lingo, that means "engine" and "motor" belong to the same "synset"; we could also say they represent the same concept. So if we now replace words with synsets in our two trees, the trees become very similar or even identical in the relevant area. That way, measuring the similarity of text passages becomes a lot more precise (as we will see later).
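
Here is a minimal sketch of such a lookup using NLTK's WordNet interface (assuming the nltk package and its wordnet corpus are installed; the synset identifiers are WordNet's own):

    from nltk.corpus import wordnet as wn  # pip install nltk, then nltk.download("wordnet")

    engine = wn.synset("engine.n.01")
    motor = wn.synset("motor.n.01")

    print(engine.definition())  # "motor that converts thermal energy to mechanical work"
    print(engine.hypernyms())   # the more general concepts "engine" falls under

    # A crude relatedness score based on distance in the concept hierarchy:
    # 1.0 means the same concept, smaller values mean farther apart.
    print(engine.path_similarity(motor))

The same kind of lookup can be run for change/replace and oil/lubricant, which is exactly the lexical knowledge our bare trees were missing.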

Now, the use of lexicons and syntactic structures will strike some people as a little old-school, pitting symbolic processing against machine learning.

But we at Nuance think differently: why not combine machine learning and symbolic processing? Enriching the raw data with syntactic and semantic information helps turn mere "big data" (think of it as lots and lots of "bags of words") into "Big Knowledge." This can then be applied to HAVAs for better customer interactions. We will explore what else this means for customer service in the third and last post of this series.
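
As a toy illustration of that combination, the sketch below (again using NLTK's WordNet interface; the matching rule is a deliberate simplification of mine) turns exact string matching into concept matching:

    from nltk.corpus import wordnet as wn

    def related(w1, w2):
        # A crude symbolic matcher: two words match if they share a synset,
        # or if a synset of one is a direct hypernym of a synset of the other.
        s1, s2 = set(wn.synsets(w1)), set(wn.synsets(w2))
        if s1 & s2:
            return True
        return any(b in a.hypernyms() or a in b.hypernyms()
                   for a in s1 for b in s2)

    print(related("engine", "motor"))   # True: engine.n.01 sits directly under motor.n.01
    print(related("engine", "banana"))  # False: no shared or neighboring concept

Signals like these, shared synsets and matching dependency relations, can then be fed into a statistical model alongside the raw words; that is the kind of enrichment we have in mind.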



About Nils Lenke

Nils joined Nuance in 2003, after holding various roles for Philips Speech Processing for nearly a decade. Nils oversees the coordination of various research initiatives and activities across many of Nuance's business units. He also organizes Nuance's internal research conferences and coordinates Nuance's ties to academia and other research partners, most notably IBM. Nils attended the Universities of Bonn, Koblenz, Duisburg and Hagen, where he earned an M.A. in Communication Research, a Diploma in Computer Science, a Ph.D. in Computational Linguistics, and an M.Sc. in Environmental Sciences. Nils can speak six languages, including his mother tongue, German, and a little Russian and Mandarin. In his spare time, Nils enjoys hiking and hunting in archives for documents that shed some light on the history of science in the early modern period.