Innovating dialog: How machines use and make sense of ellipses


This post is part of a series that explores the unique complexities of human speech and, consequently, how we create systems that appropriately take these complexities into account when interacting with users.

Rhetorical devices are commonly used in our speech, and while we naturally come to use, recognize, and understand them in our daily lives, machines must be taught to do the same. I have already introduced you to how machines make sense of paraphrasing, adult language, and anaphora, and today we’re going to explore one final example: ellipses.

Ah, yes: finally, ellipsis. Or should I have said, "Finally, let's consider ellipsis"? Often it is not necessary, or even a good idea, to speak (or write) in complete sentences (although I know they tried to teach you to do just that at school). Some of the information a complete sentence would carry may already be 'understood' by the listener (or reader), so spelling it out is wasted effort for both parties. This mutual understanding may come from the context (things that can be seen and are obvious) or from the dialog history. So, for example, instead of this dialog…

User: What are the opening hours for Ethan Allen in Cambridge?

System: 10 AM – 8 PM today.

User: What are the opening hours for West Elm in Cambridge?

…it is much more natural (and convenient) for the user to phrase the second question as:

User’: And what about West Elm?

This is obviously elliptical in that it is not a valid sentence in English (it doesn't even have a verb!). In isolation, a listener would be lost if asked to 'guess' what the missing parts might be. But because of the context, and because "Ethan Allen" belongs to the same category of words as "West Elm," it is obvious that the user wants the system to use the previous utterance as a template and simply swap the new information in for the old. In a different context, the same utterance could be resolved quite differently:

User: Tell me about job opportunities at Ethan Allen.

System: ….

User: And what about West Elm? (= Tell me about job opportunities at West Elm)

And you can also 'patch' more than one piece of new information into the old sentence; in our first example:

User’: And what about West Elm in Boston?

To make this all work, it helps that sentences typically have a set of 'slots' that stem from the verb as the centerpiece of the sentence. Date/time and place are two very common ones; and in local search (as in the Ethan Allen/West Elm example), businesses play a big role as the subjects or agents of whatever is being asked about. Dragon Mobile Assistant, for example, properly interprets utterances with ellipses involving such slot types:

Will it be sunny tomorrow? How about in three days?

How is the weather in Montreal? In Toronto?
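
To make the slot idea concrete, here is a minimal sketch, in Python, of what slot-level parses for the Montreal/Toronto example might look like. The intent and slot names are hypothetical illustrations, not the representation Dragon Mobile Assistant actually uses.

# Hypothetical slot-level view of the weather example above; the intent
# and slot names are illustrative, not an actual internal representation.

full_query = {                      # "How is the weather in Montreal?"
    "intent": "weather_forecast",
    "slots": {"place": "Montreal"},
}

fragment = {                        # "In Toronto?" -- no verb, no intent,
    "intent": None,                 # just a slot value of a known type
    "slots": {"place": "Toronto"},
}

# Resolution keeps the old intent and slots and patches in the new value:
resolved = {
    "intent": full_query["intent"],
    "slots": {**full_query["slots"], **fragment["slots"]},
}
# -> {"intent": "weather_forecast", "slots": {"place": "Toronto"}}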

The system actually uses the 'thinking' described above: parsing the input sentence assembles, among other things, a set of 'slots' (like time, place, etc.). Then, if a subsequent utterance 1) looks incomplete (we call that a "fragment"), and 2) the fragment can be mapped to one (or more) of the slots from the previous utterance, the system assembles a complete sentence by reusing the old slots where no new information was provided and patching in the new information. This, however, is just one of many types of ellipsis.
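
As a rough illustration only, here is how that fragment-and-patch logic might be sketched in Python for the Ethan Allen/West Elm dialog. The toy parser, the slot inventory, and the heuristics are invented for this post; they say nothing about the production implementation.

# Toy slot inventory and 'parser', invented for illustration only.
BUSINESSES = {"Ethan Allen", "West Elm"}
PLACES = {"Cambridge", "Boston"}

def parse(utterance):
    """Return a hypothetical parse: an intent plus whatever slots we find."""
    text = utterance.lower()
    slots = {}
    for business in BUSINESSES:
        if business.lower() in text:
            slots["business"] = business
    for place in PLACES:
        if place.lower() in text:
            slots["place"] = place
    if "opening hours" in text:
        intent = "opening_hours"
    elif "job opportunities" in text:
        intent = "job_search"
    else:
        intent = None  # no request we recognize: a candidate fragment
    return {"intent": intent, "slots": slots}

def is_fragment(parsed):
    # 1) The utterance looks incomplete: it carries slot values
    #    but no intent of its own.
    return parsed["intent"] is None and bool(parsed["slots"])

def resolve(previous, fragment):
    # 2) The fragment's slots must map onto slots the previous utterance
    #    already filled; then reuse the old parse, keeping old slots where
    #    no new information was provided and patching in the new values.
    if previous is None or not set(fragment["slots"]) <= set(previous["slots"]):
        return None  # cannot resolve; ask the user to rephrase
    merged = dict(previous["slots"])
    merged.update(fragment["slots"])
    return {"intent": previous["intent"], "slots": merged}

previous = parse("What are the opening hours for Ethan Allen in Cambridge?")
followup = parse("And what about West Elm in Boston?")
if is_fragment(followup):
    print(resolve(previous, followup))
# {'intent': 'opening_hours', 'slots': {'business': 'West Elm', 'place': 'Boston'}}

The key point is simply that the fragment contributes slot values but no verb (and hence no intent of its own), so the previous utterance supplies the frame into which the new values are patched.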

So, in summary: you may not have to brush up on your Greek after all, but we do have to make sure our systems know their παράφρασις, ἀναφέρειν, and ἔλλειψις.

Explore other posts in this series:

Innovating machine dialog: Brush up on your Greek and read Aristotle

Innovating dialog: How machines make sense of paraphrasing and adult language

Innovating dialogue: How machines make sense of anaphora



About Nils Lenke

Nils joined Nuance in 2003, after holding various roles for Philips Speech Processing for nearly a decade. Nils oversees the coordination of various research initiatives and activities across many of Nuance’s business units. He also organizes Nuance’s internal research conferences and coordinates Nuance’s ties to Academia and other research partners, most notably IBM. Nils attended the Universities of Bonn, Koblenz, Duisburg and Hagen, where he earned an M.A. in Communication Research, a Diploma in Computer Science, a Ph.D. in Computational Linguistics, and an M.Sc. in Environmental Sciences. Nils can speak six languages, including his mother tongue German, and a little Russian and Mandarin. In his spare time, Nils enjoys hiking and hunting in archives for documents that shed some light on the history of science in the early modern period.