How I learned my ABCs: The similarities between AI and toddlers

Artificial intelligence (AI) is transforming healthcare decision-making and will be a big topic of conversation at this year’s HIMSS. From improving the accuracy and quality of clinical documentation to helping radiologists flag abnormal images as high priority, AI is freeing clinicians to focus more of their brain cycles on delivering effective patient care.
Neural networks work like the human brain, creating connections and getting smarter based on data and corrections, much as a toddler learns the differences between animals.

Now, thanks to the impact of deep neural networks (DNNs), the application of AI and machine learning to healthcare may finally be reaching a crucial tipping point. But what are neural networks? One of the best ways to understand this is to think about how children learn.

I’ve been teaching my two-year-old about animals, pointing to different ones in a book. It struck me that there are a lot of similarities in the basic elements of animals, yet small children are able to learn and tell them apart. Four legs and a tail could describe almost any land-dwelling animal. But one has a very long neck while the other has a trunk. These distinguishing characteristics help our brains analyze the information and arrive at the correct conclusions: a giraffe versus an elephant.

Neural networks are designed to work in much the same way the human brain works. An array of simple algorithmic nodes—like the neurons in a brain—analyze snippets of information and make connections, assembling complex data puzzles to arrive at an answer. The “deep” part refers to the way deep neural networks are organized in many layers, with the intermediate (or “hidden”) layers focused on identifying elemental pieces (or “features”) of the puzzle and then passing what they have learned to deeper layers in the network to develop a more complete understanding of the input and produce a valid output.
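To make the layering idea concrete, here is a minimal sketch in Python/NumPy of a tiny, untrained feedforward network. Everything in it is invented for illustration (the layer sizes, the four "animal" features, the random weights); it is not drawn from any real system.

```python
import numpy as np

# Minimal sketch: a tiny feedforward network whose hidden layers transform
# raw input features into progressively more abstract representations
# before the output layer "votes" between two classes.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Layer sizes are illustrative: 4 raw features (e.g. legs, tail,
# neck length, trunk length) -> two hidden layers -> 2 classes.
W1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 8)); b2 = np.zeros(8)
W3 = rng.normal(size=(8, 2)); b3 = np.zeros(2)

def forward(features):
    h1 = relu(features @ W1 + b1)  # hidden layer 1: elemental features
    h2 = relu(h1 @ W2 + b2)        # hidden layer 2: combinations of features
    return softmax(h2 @ W3 + b3)   # output: P(giraffe), P(elephant)

print(forward(np.array([4.0, 1.0, 9.0, 0.0])))  # untrained, so ~random
```

Until the weights are trained, the output is essentially a guess, which is exactly where the toddler analogy picks up.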

Just like my two-year-old, and all other humans, the network is not born with specific knowledge; it must be trained, like learning the difference between a giraffe and an elephant by noticing that one has a long neck and the other a trunk. By feeding the network large amounts of data with known answers, we are effectively “teaching” it how to interpret and understand various inputs; this is also known as “machine learning.” For example, training a DNN to perform medical transcription might involve feeding it billions of lines of spoken narrative and the resulting textual output to create a “truth set” of spoken words connected with accurate text. The truth set expands over time as the DNN is subjected to more inputs, and the network’s ability to deliver the correct answer becomes more robust. If it gets something wrong, the DNN must be corrected to reinforce its understanding. Like a toddler just learning to identify colors, shapes and animals, the DNN will soon be able to deliver the right answer.
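As a toy illustration of training against a truth set, the sketch below fits a single “neuron” (a logistic-regression unit) on a handful of invented, labeled examples, nudging its weights whenever the prediction is off. A real DNN does the same thing at vastly larger scale, across many layers, via backpropagation.

```python
import numpy as np

# Toy "truth set": each example pairs input features with a known label,
# and the model's weights are nudged whenever its prediction is wrong.
# Feature values and labels are invented for illustration.

rng = np.random.default_rng(1)

# [neck_length, trunk_length] -> 0 = elephant, 1 = giraffe
X = np.array([[9.0, 0.5], [8.5, 0.3], [2.0, 6.0], [1.5, 7.0]])
y = np.array([1, 1, 0, 0])

w = rng.normal(size=2)
b = 0.0
lr = 0.1

for epoch in range(200):
    for features, label in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # predicted P(giraffe)
        error = p - label                              # correction signal
        w -= lr * error * features                     # adjust the weights
        b -= lr * error

# A new, unseen animal with a long neck and no trunk:
print(1.0 / (1.0 + np.exp(-(np.array([8.0, 0.4]) @ w + b))))  # ~1: giraffe
```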

So how are DNNs changing the way healthcare is practiced? Two areas among many potential applications are clinical documentation improvement (CDI) and radiology image processing. Clinical documentation includes a wide range of inputs, from speech-generated or typed physician notes to labs and medications. Traditionally, CDI involves having domain experts review the documentation to ensure it accurately represents a patient’s condition and diagnosis. However, this approach requires time and resources, and can be disruptive to physician workflow. Automating it has traditionally been an arduous, complex undertaking: capturing and digitizing the domain expertise to create a knowledge base, then applying natural language processing technology to generate a query for the physician in real time as she is entering her documentation.

Neural networks improve this process dramatically. Now we can use historical clinical documentation from physicians, including the queries generated by domain experts, to create a truth set for training the neural network. This allows us to skip all the complexity in the middle; the DNN figures that out for itself, based on what it “learned” from the historical truth set. Ultimately, this helps improve documentation by having AI figure out the missing pieces or connections and advise physicians in real time while they’re still charting. What AI is doing here is allowing physicians to focus on patients while the system manages the billing codes, regulatory requirements, quality measures and safety indicators in records.
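A hypothetical, heavily simplified version of that idea in Python (using scikit-learn): historical notes labeled by whether a domain expert raised a CDI query form the truth set, and a model learns to flag new notes that likely need a query. The notes and labels here are invented, and a production system would train a DNN on far larger volumes of real documentation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical notes plus the experts' verdicts become the truth set.
notes = [
    "patient with CHF exacerbation, started on lasix",
    "acute on chronic systolic heart failure, EF 25%, IV furosemide",
    "fractured left wrist, splint applied, follow up in ortho clinic",
    "routine well child visit, immunizations up to date",
]
needs_query = [1, 0, 0, 0]  # 1 = expert asked for more specificity

# Text features + a simple classifier stand in for the trained network.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, needs_query)

# Score a new note as the physician is charting.
new_note = "CHF, continue current meds"
print(model.predict_proba([new_note])[0][1])  # probability a query is needed
```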

DNNs are also changing the game for evaluating visual data, including radiological images. Reading the subtle clues in an image takes the highly experienced eyes of an expert who has studied thousands of similar ones. With neural networks, we can leverage this experience by training the network on thousands of radiological images with known diagnoses. The more images fed through it, the more “experienced” and accurate it becomes, enabling the network to detect the subtle differences between a positive finding and a negative one. This technology is going to augment the busy workflow of radiologists and truly amplify their knowledge and productivity by helping them do things like prioritize the most critical studies. With some radiologists reading 100 images a day, having AI sift through and flag atypical images for priority review delivers value to physicians and patients, both of whom are looking for the best outcomes.
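Here is a schematic sketch of that worklist triage in Python. The MockModel scores are stand-ins for a real trained image classifier; only the reordering logic is the point.

```python
import heapq

# Schematic AI-assisted worklist triage: a trained classifier scores each
# study's probability of an abnormal finding, and the radiologist's queue
# is reordered so likely-positive studies surface first.

class MockModel:
    """Stand-in for a trained DNN; a real model would analyze pixel data."""
    def predict_abnormality(self, study_id):
        return {"study_17": 0.92, "study_42": 0.08, "study_99": 0.55}[study_id]

def triage(worklist, model):
    # Higher abnormality score -> read sooner (max-heap via negated score).
    scored = [(-model.predict_abnormality(s), s) for s in worklist]
    heapq.heapify(scored)
    while scored:
        yield heapq.heappop(scored)[1]

model = MockModel()
print(list(triage(["study_42", "study_17", "study_99"], model)))
# -> ['study_17', 'study_99', 'study_42']
```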

The possibilities for neural networks are incredibly exciting—they are powerful tools for augmenting human expertise, not replacing it. Clinicians today have so many responsibilities, and AI is a promising way to help offset that work and allow them to focus more on patient care and activities that require a human touch.

Want to chat more about AI and neural nets? Visit us at HIMSS17, booth #2546, where we’ll be highlighting Nuance AI-powered healthcare solutions at a “Try-it Station.”


About Joe Petro

Joe Petro is the senior vice president of engineering for Nuance’s Healthcare division, where he leads a large global team responsible for research and development across the entire Nuance Healthcare product portfolio and collaborates with clients to bring solutions to market. Prior to Nuance, Joe was the senior vice president of product development at Eclipsys Corporation, a major player in the electronic health record (EHR) and computerized provider order entry (CPOE) space. While at Eclipsys, Joe served on the executive staff, where he was responsible for the development of more than 30 products spanning ADT, departmental, inpatient, ancillary, patient financial management and outpatient solutions. He earned an M.S. in Mechanical Engineering from Kettering University and a B.S. in Mechanical Engineering from the University of New Hampshire, graduating summa cum laude from both.