This post is part of a series that explores the unique complexities of human speech and, consequently, how we create systems that appropriately take these complexities into account when interacting with users.
Now that I’ve introduced you to the prominence of rhetorical devices in everyday speech in my first post, let’s move on to another concept stemming from the Ancient Greeks, known as paraphrasing, or “saying the same thing in other words.” The preceding sentence already demonstrated the principle: it offered a word and its definition as a paraphrase of it. Paraphrasing is often used in dialogues as a tool to establish that things have been understood correctly. The recipient of a message repeats it in their own words; phrases like “did you mean…?” and “are you saying…?” signal this technique. There is a borderline case as well: saying the phrase again in the same words, which is more “repeating” than paraphrasing but can serve the same purpose.
The advantage of paraphrasing is that it can resolve ambiguities in the original wording while getting the same message or inference across. As we discussed in the original post on rhetorical devices, paraphrasing is something humans learn and adapt to over time as part of language. But for machines and devices, this is yet another element of human language they must learn.
In fact, automatic systems already use paraphrasing a lot; Nuance’s “Florence” virtual assistant for healthcare professionals is a good example. At one point the following turns occur:
Physician: “sign note and send a copy to my assistant”
Florence: “note signed and sent to Christine”
This demonstrates a mix of repetition and paraphrase: replacing “my assistant” with “Christine,” which is especially helpful when there could be confusion about which person the user means. It also avoids full repetition, and ultimately sounding like a parrot. What Florence does here is what our user interface designers call “implicit verification”: giving a paraphrase of what was understood to allow the user to abort or undo an action, but without explicitly asking for verification. With explicit verification, Florence would have paused and asked, “Is that correct?” The example above also demonstrates multimodal “paraphrasing” and verification, something designers and engineers commonly build into systems to enable a richer user experience. In many instances, Florence displays the text she understood instead of verbalizing it (again), and occasionally combines that with a verbal (explicit) verification: “Is the order [on the display] correct?”
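The implicit verification pattern above can be sketched in a few lines. This is not Florence’s actual implementation, just a minimal illustration of the idea: the system paraphrases the completed action back to the user, swapping a role reference (“my assistant”) for the resolved contact name. The lookup table and function names are invented for this example.

```python
# A hypothetical sketch of implicit verification: confirm an action by
# paraphrasing it rather than asking "Is that correct?" explicitly.
# The directory mapping below is an assumed, invented contact lookup.

ROLE_DIRECTORY = {"my assistant": "Christine"}

def implicit_verification(action_done: str) -> str:
    """Paraphrase a completed action back to the user, replacing role
    references with the resolved person's name."""
    confirmation = action_done
    for role, name in ROLE_DIRECTORY.items():
        confirmation = confirmation.replace(role, name)
    return confirmation

print(implicit_verification("note signed and sent to my assistant"))
# prints "note signed and sent to Christine"
```

Naming the resolved contact does double duty: it avoids parrot-like repetition and quietly confirms which person the system understood.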
Closely related to paraphrase is periphrase: verbalizing content in a certain form instead of (not in addition to) another form. For example, to avoid inappropriate or possibly offensive language, or complicated scientific terms, we can talk around these concepts in other words. In Victorian and Edwardian England, authors occasionally went so far as to switch to another language (Latin, or even classical Greek).
Today, there are design and language considerations around this “adult” language. A user may use some adult language, but a designer or developer may not want the system to echo it back in its response (although some systems do; it’s all a matter of preference), and so they opt for it to employ periphrase. In this case, the system needs to be able to use different words for the same meaning, and to understand when to use which word or phrase.
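One simple way a system could employ periphrase is a substitution pass over its own response before speaking it: flagged terms are replaced with gentler or plainer alternatives. This is only a toy sketch; the wordlist and both example entries are invented, and a production system would need far more context sensitivity than string replacement.

```python
# A hypothetical periphrase filter: the system rewords flagged terms in
# its own output. Entries below are invented for illustration; one maps
# mild profanity to a euphemism, the other a clinical term to plain words.

PERIPHRASES = {
    "damn": "darn",
    "myocardial infarction": "heart attack",
}

def apply_periphrase(response: str) -> str:
    """Replace each flagged term with its milder or plainer substitute."""
    for term, substitute in PERIPHRASES.items():
        response = response.replace(term, substitute)
    return response

print(apply_periphrase("The chart notes a myocardial infarction"))
# prints "The chart notes a heart attack"
```

The harder part, as noted above, is not the substitution itself but knowing when each wording is appropriate for the audience and situation.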
To us, these facets of speech seem all too common. But for machines, it took years and years of advancement to get to the point we’re at now, with innovations like Florence able to leverage paraphrases to interact more naturally with users.
This is only one aspect of the much more general task of selecting appropriate words and phrases that is part of Natural Language Generation (NLG). For today’s intelligent virtual assistants, this has moved us beyond “canned text” and templates toward full-blown NLG: starting with content selection and planning what to say, then moving to surface generation, where the system selects the appropriate words to satisfy the user. Even with all of this progress, you will see still more advanced NLG make an appearance in systems soon.
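The two NLG stages mentioned above can be illustrated with a toy example, assuming a hypothetical event record as input. Content planning decides which facts the user needs to hear; surface generation chooses the words. The data structures, field names, and template here are all invented for illustration.

```python
# A toy two-stage NLG pipeline: content selection/planning (what to say)
# followed by surface generation (which words to say it with).
# The event record and its fields are assumptions for this sketch.

def select_content(event: dict) -> dict:
    """Content planning: keep only the facts worth telling the user,
    dropping internal details like timestamps."""
    return {"action": event["action"], "recipient": event["recipient"]}

def surface_generate(plan: dict) -> str:
    """Surface realization: choose a wording for the planned content."""
    return f"{plan['action']} and sent to {plan['recipient']}"

event = {"action": "note signed", "recipient": "Christine", "timestamp": "10:42"}
print(surface_generate(select_content(event)))
# prints "note signed and sent to Christine"
```

Even this tiny split shows why the separation matters: the same content plan could be realized as speech, on-screen text, or both, each with different wording.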
Language is an incredibly powerful communication tool with many facets. When it comes to our virtual assistants, teaching the systems our language, and designing their ability to converse the way we humans do, is incredibly complex work that makes our virtual assistants all the more intelligent (not to mention enjoyable to interact with).
Coming up next: How machines make sense of anaphora