What’s next.

Continued progress in reinventing how people connect with technology and each other.

From the library of What’s next: archives from R&D

Intelligence at Work: Qure.ai applies deep learning and artificial intelligence to streamline and improve the radiologic diagnosis of chest X-rays and the triage of head CTs

Qure.ai’s team of experts works to define clinically relevant problems and design real-world solutions that are deployed in 14 countries around the globe. The company seeks to improve diagnostic efficiency and accuracy in radiology, with an initial focus on chest X-rays and head CTs. Once FDA-cleared, Qure.ai algorithms will be integrated with Nuance’s next-generation reporting platform, PowerScribe One.

Access to accurate and early diagnosis has become key to delivering quality healthcare around the globe. In many locales, the doctor-to-patient ratio is low, and lower still for specialized practitioners such as radiologists. In underserved and remote regions, radiologist expertise is scarce, costly, and unequally distributed. Even in developed parts of the world, heavy workloads are driving burnout and higher error rates among radiologists. This means that not all patients receive the most accurate, timely diagnosis. AI-driven radiology solutions can automate much of the routine work, saving precious time for radiologists and helping mitigate clinician burnout.

Chiranjiv Singh, Chief Commercial Officer of Qure.ai, shares his insights about how Qure.ai’s algorithms aim to make radiologic diagnoses more accurate and efficient by delivering AI capabilities within radiologists’ everyday workflows to optimize results and deliver better patient care.

Qure.ai’s product philosophy is to solve customers’ clinical and workflow needs and to go deep into certain areas rather than spreading across a spectrum of clinical domains. Qure.ai has trained its algorithms on more than 7 million exams sourced globally and prides itself on having been validated in multiple peer-reviewed papers. In line with this philosophy, Qure.ai has commercially released two algorithms to date, one focused on detecting abnormalities in chest X-rays and the other on triage and diagnostic support for head CT scans. As of this writing, the CT algorithm is 510(k) pending with the US FDA.

Q&A

Jonathon Dreyer: Tell us about your business – when and how you started and your development journey.

CS: Qure.ai is a healthcare AI startup that applies artificial intelligence and deep learning technology to radiology imaging for quick and accurate diagnosis of diseases. Our algorithms can automatically detect clinical findings and highlight the relevant areas from X-rays, CT scans, and MRIs in a few seconds. This allows physicians to spend more quality time with patients to better understand their specific case/symptoms, communicate the diagnosis, and determine and discuss customized treatment plans – leading to better patient care.

Qure.ai was founded in 2016 by Prashant Warier and Dr. Pooja Rao. Prashant is a career data scientist and entrepreneur, and Pooja is a trained clinician. Together they bring complementary skills of engineering and medicine critical to product development. From humble beginnings in India 3 years ago, Qure.ai is now present across 14 countries through 80+ deployments and has processed more than 200,000 scans.

Our solutions have been validated and reviewed by clinicians at leading healthcare institutions such as Massachusetts General Hospital and the Mayo Clinic, among others. The Lancet published a validation study of our technology, the journal’s first article on radiology AI. Qure.ai’s software is vendor-neutral and is deployed online with cloud-based processing capabilities integrated into radiologists’ current reporting workflows.

JD: What AI algorithms do you have and what do they do?

CS: We have two commercially released algorithms so far and are working to obtain regulatory clearance for their clinical use in the US market.

  • qXR scans chest X-rays to identify and localize 18 clinically relevant findings with an accuracy of over 95%. We have deployed it in a variety of use cases, from screening to radiology assistance to post-read quality control. For example, qXR can screen for tuberculosis and is used in public health screening programs globally. When used as a point-of-care screening tool for TB, followed by immediate bacteriological/NAAT confirmation, qXR significantly reduces time to diagnosis.
  • qER is designed to triage critical cases and provide diagnostic assistance for head CT scans – a first-line diagnostic modality for patients with head injury or stroke. qER automatically detects intracranial hemorrhage (ICH) and its subtypes (intraparenchymal (IPH), intraventricular (IVH), subdural (SDH), extradural (EDH), and subarachnoid (SAH)), cranial fractures, midline shift, and mass effect from non-contrast head CT scans. (A toy sketch of how such findings might be consumed for triage follows this list.)
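
To make the triage idea concrete, here is a minimal sketch of how a downstream system might consume findings like qER’s to flag a study for urgent review. The JSON payload shape, field names, and flagging rule are all invented for illustration; they are not Qure.ai’s actual output format.

```python
import json

# A toy consumer of head-CT triage output. The payload shape, field
# names, and rules below are invented; not Qure.ai's actual format.
qer_output = json.loads("""
{
  "study": "HEADCT-001",
  "findings": {
    "intracranial_hemorrhage": {"present": true, "subtype": "SDH"},
    "midline_shift": {"present": false},
    "cranial_fracture": {"present": false}
  }
}
""")

CRITICAL = {"intracranial_hemorrhage", "midline_shift", "mass_effect"}

def needs_urgent_read(output: dict) -> bool:
    # Flag the study if any critical finding is marked present.
    return any(name in CRITICAL and details.get("present")
               for name, details in output["findings"].items())

print(needs_urgent_read(qer_output))  # True -> push to the top of the worklist
```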

JD: What’s the big “Aha” moment when you first show users what your AI algorithm(s) can do for them?

CS: The first Aha moment for customers is the depth of our capability. Unlike other AI algorithms on the market that may detect only a few findings on an X-ray, we are able to detect and show accuracy numbers for 18 clinical findings with qXR. Similarly, with qER, we detect multiple subtypes of ICH along with cranial fractures, midline shift, and mass effect – a broader triage capability than most customers have seen so far from other AI vendors.

The next big Aha comes when customers see the richness of our peer-reviewed publications. Every AI company wants to claim high accuracy numbers, and yet there is a lack of trust among clinicians. We treat building trust as core to our company and have therefore invested resources in exposing our algorithms to multiple independent reviews and peer-reviewed publications that help reduce that trust deficit. The fact that our algorithms can identify and label the exact abnormalities, as well as their locations within the scans, in a matter of minutes and with near-radiologist accuracy in a clinical setting has been our biggest highlight.

Lastly, our integration within the radiology workflow is the final wow! For example, we have worked with Nuance to integrate our AI algorithm outputs into PowerScribe One, allowing radiologists to consume those outputs in their preferred workflow. We are also integrating our outputs to help prioritize radiologist worklists using PowerScribe Workflow Orchestration.

JD: What challenges or needs did you see that drove you to focus on this?

CS: Access to accurate and early diagnosis is crucial to delivering quality healthcare. In many places around the world, the availability of specialized radiology resources is limited. And even in more developed countries, exam volume is increasing exponentially, limiting radiologists’ ability to deliver timely, accurate diagnoses. Burnout is increasing, and so is the potential for errors. Our solutions can help automate much of the routine work, saving precious time for radiologists and thereby helping prevent clinician burnout.

We saw this as a need and simultaneously an opportunity to leverage the power of deep learning to develop solutions dedicated to this market. Our mission is to use artificial intelligence to make healthcare more accessible and affordable.

JD: What’s the number one benefit you offer?

CS: The number one benefit we offer our users is “trust and peace of mind.” This is possible only when a product is reliable and also invisible. We want our users – be they radiologists or public health experts – to focus on their patients and trust us for the accuracy of our algorithms. We also want to embed ourselves into their workflow in a manner that becomes almost invisible to their daily practice. We believe our AI solutions will be successful only if we are able to build integrated solutions with companies like Nuance that solve clinically relevant problems.

This is easier said than done. It means working hard to build solutions that are globally trained and validated, built on a large volume and variety of data, and embedded into diverse clinical workflows. It’s the challenge of meeting our customers’ expectations on this benefit that keeps us up at night.

JD: Are there any stories you can share about how your algorithm(s) drove measurable patient care outcomes?

CS: One of our customers is the Philippine Business for Social Progress, a local screening agency and the first adopter of artificial intelligence algorithms for tuberculosis detection in the Philippines. Working with their team, we built custom, end-to-end TB workflow and patient registration software that helps health workers immediately refer people with suspected TB for confirmatory tests. Our solution is deployed in multiple mobile vans that move across different pockets of Manila and has been in use for more than six months. Prior to using qXR, completing a patient diagnosis took more than two weeks. We have reduced that to less than one day (from screening to X-ray to lab tests). We have identified 25% more TB cases than the original workflow and have screened more than 30,000 individuals using our AI solution.

JD: What benefits do Nuance and its AI Marketplace for Diagnostic Imaging bring to your users? What problems do the marketplace and integration into Nuance’s workflow solve?

CS: Nuance and its AI Marketplace bring two key benefits to our users. The first is a single platform to review, try, and buy AI algorithms. Customers need a trusted partner with vetted solutions that connects trusted AI developers to clinical users. The Nuance AI Marketplace does this for every stakeholder in the user organization: clinicians get access to algorithms they can evaluate for clinical accuracy; IT administrators get easy integration without running multiple deployment projects with independent vendors; and purchasing and finance teams get streamlined negotiations and less time spent executing multiple contracts.

The second and equally important benefit is seen once the purchase decision has been made. For our solutions to work and be used, they need to be accessible to the users when they are reviewing images and dictating their reports. We want to embed ourselves into customers’ workflow in a manner that is almost invisible to their daily practice. Nuance offers the right point and platform for this integration into the radiologist workflow for AI solutions like ours, and we are really excited to be part of this platform.

JD: What has your experience been working with the Nuance team?

CS: The Nuance team has been not only professional but also extremely knowledgeable and proficient in diagnostic imaging and reporting workflows. They understand the use cases for bringing in technologies like AI to meet the real needs of their customers. I am looking forward to this partnership as we jointly work with our customers and deliver value to them.

JD: What is your vision for how your solution(s) will evolve over the next 5 years?

CS: In the next five years, I see us offering more comprehensive solutions across various clinical domains, solving customer challenges at various points in patients’ diagnostic journeys. We will enhance our capabilities by increasing our clinical coverage beyond the chest X-ray and head CT offerings we have today. In terms of diagnostic workflows, we see ourselves offering more measurement and diagnostic tools to aid radiologists in their reads, and even taking on tasks like treatment progression monitoring to aid other clinical users. Five years is a very long time in the field of AI, and I am confident that Qure.ai will be a dominant global player and a trusted partner for our customers over that time frame.

JD: In one sentence, tell us what you think the future of medicine will look like.

CS: The future of medicine will be custom designed and served, focusing both on prevention and cure, and most importantly, accessible to all.

Learn more:

To learn more about Qure.ai, please visit www.qure.ai

To learn more about Nuance AI Marketplace for Diagnostic Imaging, please visit https://www.nuance.com/healthcare/diagnostics-solutions/ai-marketplace.html

Intelligence at Work is a blog series by Jonathon Dreyer, Senior Director, Solutions Marketing, Healthcare Division for Nuance Communications. Intelligence at Work showcases projects and applications that demonstrate how Nuance technologies extend the value, use, and performance of integration and development partner offerings. This blog series focuses on inspiring the healthcare developer community to think beyond their current state and take their innovations to new heights by tapping into the latest in artificial intelligence.


Intelligence at Work: Knee Osteoarthritis Labeling Assistant (KOALA) for detecting signs of knee osteoarthritis by IBL

Read how IBL’s KOALA AI-driven application, currently pending FDA 510(k) clearance, can help improve the assessment and diagnosis of many musculoskeletal conditions and impact patient care. It supports physicians in detecting signs of knee osteoarthritis based on standard joint parameters and helps track disease progression. It is available for review on the Nuance AI Marketplace for Diagnostic Imaging and, once cleared, will be integrated with Nuance’s next-generation reporting platform, PowerScribe One.

As the population ages, arthritis and other musculoskeletal diseases are an increasing cause of physician visits and healthcare spending. With increased prevalence comes an increased burden for rapid, precise diagnosis and staging, as well as the ability to predict future disability. Unfortunately, interpreting orthopedic images can be laborious. There is a need for standardization and simplification while providing quantitative disease parameters to support treatment decisions. Precise measurements are the missing link to tracking the slow progression of degenerative diseases.

Dr. Richard Ljuhar, CEO and co-founder of ImageBiopsy Lab (IBL), shares his thoughts about how IBL’s AI-driven musculoskeletal imaging algorithms aim to improve the assessment and diagnosis of a range of musculoskeletal conditions, including osteoarthritis (OA), osteoporosis, and rheumatoid arthritis. The goal is driving timely and appropriate interventions to reduce morbidity and disability, relieving pain and improving patients’ lives.

Interpreting musculoskeletal images is a challenge due to the lack of objective analysis methods and standardized digital documentation of radiographic changes. Because of these shortcomings, diagnosis and predictive assumptions show significant inter-rater variabilities and are thus often unreliable. IBL uses state-of-the-art artificial intelligence technology to efficiently address these challenges, relieving physicians and researchers of time-consuming image analysis tasks, while at the same time improving diagnostic accuracy and predictive capability.

Q&A

Jonathon Dreyer: Tell us about your business – when and how you started and your development journey.

Richard Ljuhar: ImageBiopsy Lab (IBL) was founded by a team of experienced professionals and specialists in medical technology and AI, along with board-certified doctors in orthopedics and radiology. Drawing on the management team’s personal experience, plus intensive discussions, brainstorming, and surveys of medical users, we have been working on the core elements of our AI modules since 2012. IBL was incorporated in 2016 and began implementing its business strategy. The initial focus was on applying deep-learning methods to knee osteoarthritis (OA), our first use case. But our modular platform technology is designed to be applicable to any orthopedic imaging data, so we have expanded beyond knee OA to other musculoskeletal disease applications.

JD: What AI algorithms do you have and what do they do?

RL: The focus of IBL is on digital X-ray and musculoskeletal diseases, with artificial intelligence-driven solutions for anatomical regions such as the knee, hand, hip, whole leg, and spine. Our first CE-marked, 510(k)-pending module, KOALA (Knee Osteoarthritis Labeling Assistant), supports physicians in detecting signs of knee osteoarthritis based on standard joint parameters and OARSI criteria from standing radiographs of the knee. PANDA (Pediatric Bone Age and Developmental Assessment) supports an objective, standardized determination of pediatric bone age. HIPPO (Hip Positioning) supports objective, standardized measurement of the most important hip angles based on digital X-rays.

JD: What’s the big “Aha” moment when you first show users what your AI algorithm(s) can do for them?

RL: A remark from Peter Steindl, MD, an orthopedic surgeon, sticks in my mind. He said, “I guess my biggest ‘Aha moment’ was when I realized the potential to measure and compare sclerosis, joint space narrowing, and OA grades in an objective way in a particular patient over a couple of years. I think this device/software might be very helpful in finding the optimal timing for planning joint replacement surgery of a patient’s knee.”

JD: What challenges or needs did you see that drove you to focus on this?

RL: After years of experience and discussions with medical experts, IBL identified that orthopedic diagnosis could benefit immensely from AI-driven solutions. Workflows are time-consuming and elaborate, and interpretations are often subjective and difficult to reproduce. Additionally, image reading and interpretation often haven’t changed significantly since the introduction of radiography. The need to bring musculoskeletal and orthopedic radiology into the digital age drove our motivation to change the status quo. IBL’s software offers simplification and standardization while also providing quantitative disease parameters to support treatment decisions.

JD: What’s the number one benefit you offer?

RL: While we support medical experts and their patients in numerous areas along the diagnostic pathway, we see the greatest benefit of our solutions in automation and in the consistent documentation of radiological parameters. Big data and artificial intelligence cannot replace physicians, but they can relieve them of time-consuming routine tasks. This should allow medical experts to invest their time where it is most needed: with their patients!

JD: Are there any stories you can share about how your algorithm(s) drove measurable patient care outcomes?

RL: Our experience, and that of our customers, has shown that our solutions produce a higher level of agreement between physicians, improved patient communication, more appropriate and timely therapy decisions, and an increase in patient loyalty. In fact, we have even had patients approach us directly to ask if we could run the digital analysis of their X-rays because they wanted an accurate assessment of their disease progression.

JD: What benefits do Nuance and its AI Marketplace for Diagnostic Imaging bring to your users? What problems do the marketplace and integration into Nuance’s workflow solve?

RL: IBL and Nuance deliver their core value at the most critical interface of the radiology workflow: translating image information into a report. Our AI solutions facilitate this transition by providing quantitative, objective measurements, so the flawless integration of our AI output to pre-fill reporting templates via Nuance delivers the most value to existing workflows. Nuance streamlines delivery at the heart of where radiologists’ time and decision-making matter most, while providing a scalable IT infrastructure and customer base. The result is a win-win-win for IBL, Nuance, and the physicians who benefit from the time savings and quality improvements.

JD: What has your experience been working with the Nuance team?

RL: We at IBL especially like the forward-thinking design of how AI results are injected into existing reporting workflows, which made it highly attractive for us to collaborate. The early designs of Nuance’s AI-driven solutions already reflect the experience and professionalism of a company with tremendous domain knowledge and the ability to deliver the promised value of AI for physicians. Nuance’s responsive support allowed IBL to quickly ramp up demos and use cases, and we are very happy to be part of the family.

JD: What is your vision for how your solution(s) will evolve over the next 5 years?

RL: IBL will expand its portfolio of fully automated AI solutions for musculoskeletal radiology where automation matters the most: time-saving, objective outcome measures on standardized, high-volume tasks that enable easier comparison between repeated visits of the same patient. With this, the workload of the orthopedist and radiologist can decrease while the quality of results increases. And because precise measurements are the missing link to tracking the slow progression of certain musculoskeletal diseases, radiologists using IBL’s solutions deliver the perfect service to their referring orthopedists, who can apply IBL’s outcome measures to tailor personalized treatments and monitor their efficacy over time. The longitudinal structured data from our AI solutions supports powerful prediction models that combine our AI results and clinical data to predict the future progression of a patient’s condition. This is possible thanks to IBL’s decade of experience building image-processing algorithms and transforming immense datasets into actionable clinical decision support.

JD: In one sentence, tell us what you think the future of medicine will look like.

RL: Automation and standardization will lead to an increasing amount of structured data, which in turn will lead to a growing number of AI applications in the years to come.

Learn more:

To learn more about ImageBiopsy Lab, please visit www.imagebiopsylab.ai

To learn more about Nuance AI Marketplace for Diagnostic Imaging, please visit https://www.nuance.com/healthcare/diagnostics-solutions/ai-marketplace.html

Intelligence at Work is a blog series by Jonathon Dreyer, Senior Director, Solutions Marketing, Healthcare Division for Nuance Communications. Intelligence at Work showcases projects and applications that demonstrate how Nuance technologies extend the value, use, and performance of integration and development partner offerings. This blog series focuses on inspiring the healthcare developer community to think beyond their current state and take their innovations to new heights by tapping into the latest in artificial intelligence.


A Real View: the last mile in implementing AI

Nuance Healthcare Diagnostics Vice President and General Manager Karen Holzberger sat down with Nuance Senior Director of Product Management Sander Kloet to discuss the importance of addressing the “last mile” challenge in deploying AI for radiology, one of the key topics that he and other industry experts will cover at the RSNA regional course on AI on May 31 in San Francisco, CA.

Imagine that you are a busy professional whose very long days are packed with a mix of routine and critical time-sensitive tasks, each of which requires close attention and thorough and accurate paperwork. One day you learn that powerful new tools can help you get more work done in less time with improved quality and greater benefit to those counting on your expertise. But there’s a catch: to achieve these gains you must take time you don’t have to fit the new tools into your workday.

That’s the essence of the “last mile” challenge facing radiologists looking to leverage a growing number of AI diagnostic models and workflow tools to manage increasing workload volumes, maximize value-based reimbursements, reduce administrative burdens that contribute to burnout, and ultimately, improve patient outcomes. The ability to integrate AI tools into current radiology workflows seamlessly and intuitively is vital to realizing the benefits.

I recently sat down with Sander Kloet, who will lend his expertise in product design and implementation to the upcoming RSNA regional AI course by discussing the “last mile” challenge and the solutions and approaches to address it.

KH: What’s the “last mile” problem and what does it mean for radiologists and AI?

Sander: The idea of the “last mile” connoting the final leg of a journey originated in telecom and logistics to describe the work remaining to reach the intended destination or outcome. At the same time, it indicates that although there are still a few hurdles to clear, the goal is within reach. In that sense it’s a highly motivating and energizing challenge.

When we think about the last mile problem for radiologists, we recognize that for AI to realize its potential to advance radiology, it must fit seamlessly into a radiologist’s workflow and not be an add-on requiring extra steps. It must deliver both practical and clinical value as an integral part of how radiologists work. If it doesn’t, it simply won’t be used.

The key from a product design perspective is to think comprehensively. For example, image characterization algorithms can be invaluable in helping radiologists identify pulmonary nodules or brain bleeds quickly. But those results need to be delivered before the radiologist has read the study and dictated the report; otherwise, they have to take additional time to review the AI findings and modify their reports if needed. That also means making sure that image processing is optimized so that the AI results are available promptly, alongside the images from the PACS and the history from the patient’s EHR. Those are complex issues, but getting the workflow right is essential.
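
As a thought experiment, the timing constraint Sander describes can be expressed as a simple readiness gate: a study is presented for reading only once all of its inputs have arrived. This is a minimal sketch with hypothetical names, not Nuance’s actual architecture.

```python
from dataclasses import dataclass

@dataclass
class StudyContext:
    """Tracks the inputs needed before a study is presented for reading."""
    accession: str
    has_images: bool = False      # images have arrived from the PACS
    has_ai_results: bool = False  # the algorithm's output has arrived
    has_history: bool = False     # relevant history pulled from the EHR

    def ready_for_read(self) -> bool:
        # Present the study only when everything the radiologist needs is
        # available together, so AI findings never arrive mid-read.
        return self.has_images and self.has_ai_results and self.has_history

study = StudyContext("ACC-1234", has_images=True, has_history=True)
print(study.ready_for_read())  # False: still waiting on the AI results
study.has_ai_results = True
print(study.ready_for_read())  # True: safe to put in front of the reader
```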

KH: How is access to AI models integrated into the workflow?

Sander: That’s a two-part issue. The first part is simplifying the development and deployment of the many different algorithms that are needed to address the wide variety of modalities, exams, and specialties. A radiology department could potentially require over a hundred algorithms from dozens of developers, each addressing a specific diagnostic use case. Developers need to be able to reach users at scale to justify app development. Healthcare systems need to consolidate vendor access, so they don’t have to establish relationships with every developer they want to work with. Adoption of AI-driven solutions will take a frustratingly long time if there’s not a unified market where developers can reach large numbers of radiology users who can easily discover and purchase new models. That’s where the Nuance AI Marketplace for Diagnostic Imaging comes into play. It’s essentially an app store for AI diagnostic models and workflow optimization tools. It connects the 75% of radiologists and 6,000 healthcare facilities in the U.S. who use Nuance radiology reporting or image sharing solutions with AI algorithm developers in a collaborative marketplace, with a built-in feedback channel for continuous improvement.

The second part is that access to the AI Marketplace is integrated into the radiologist’s workflow tools: the worklist, the PACS, and the Nuance PowerScribe reporting system. That allows AI Marketplace clients to quickly evaluate and use the latest AI solutions and then seamlessly integrate the results into their current workflows.

KH: That covers AI model access, but what about enhancing workflow and augmenting radiologists’ expertise with AI?

Sander: Yes, good question. Physicians know from past experience that new technologies that promised improvements have instead impeded their ability to deliver quality care. It was a case of doctors having to serve the needs of the technology instead of the technology serving the needs of the doctors. Our fundamental mission at Nuance is to create technologies and solutions that not only get out of the way but genuinely empower clinicians to do what they love: take better care of their patients.

Ensuring that access to AI models is seamless from within the PowerScribe workflow is one way. A great example of that is the FDA-cleared ICH detection application developed by Aidoc and deployed at the University of Rochester to prioritize unread exams. It analyzes CT exams indicating a suspected intracranial hemorrhage and then prioritizes them on the PowerScribe worklist for a radiologist’s immediate attention when time-to-treatment is critical.
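
To illustrate the prioritization mechanic, here is a toy worklist in which AI-flagged exams jump ahead of routine ones. The flag and scoring are invented for illustration; this is not the actual Aidoc or PowerScribe Workflow Orchestration interface.

```python
import heapq

# A toy worklist: exams flagged by an AI model (e.g., suspected ICH)
# are popped before routine exams that arrived earlier.
class Worklist:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserves arrival order

    def add(self, accession: str, ai_flag: bool = False):
        priority = 0 if ai_flag else 1  # flagged exams sort first
        heapq.heappush(self._heap, (priority, self._counter, accession))
        self._counter += 1

    def next_exam(self) -> str:
        return heapq.heappop(self._heap)[2]

wl = Worklist()
wl.add("CT-001")                # routine exam, arrived first
wl.add("CT-002", ai_flag=True)  # suspected hemorrhage flagged by the AI
print(wl.next_exam())           # -> CT-002 is read first
```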

Another excellent example is the new PowerScribe One platform. It helps radiologists review and, if necessary, edit AI results, and it automatically prompts users with appropriate follow-up recommendations based on the ACR Assist™ clinical guidelines.

All of that is driven by our innovations in natural language processing and clinical language understanding (CLU), which understand the meaning and context of what the radiologist is dictating and correlate it with the AI findings. The system recognizes and stores the narrative report contents as structured data, all without requiring radiologists to change how they work or add extra steps. That’s a very big deal because it can make every part of a report accessible to the EMR and to clinical data analytics. Now, incidental findings, follow-up recommendations, and many other radiology report elements can be leveraged and tracked in ways that were previously too difficult or impractical.
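
To make “narrative to structured data” concrete, here is a deliberately tiny example of pulling a finding and a follow-up recommendation out of dictated text. Nuance’s CLU is a full statistical language-understanding system; the regular expressions and schema below are invented purely for illustration.

```python
import re

# A toy illustration of turning narrative report text into structured,
# queryable data. The patterns and schema are hypothetical.
REPORT = (
    "Incidental 6 mm pulmonary nodule in the right upper lobe. "
    "Recommend follow-up CT in 12 months."
)

finding = re.search(r"(\d+)\s*mm\s+pulmonary nodule", REPORT)
followup = re.search(r"follow-up (\w+) in (\d+) months", REPORT)

structured = {
    "finding": "pulmonary nodule",
    "size_mm": int(finding.group(1)) if finding else None,
    "followup_modality": followup.group(1) if followup else None,
    "followup_interval_months": int(followup.group(2)) if followup else None,
}
print(structured)
# {'finding': 'pulmonary nodule', 'size_mm': 6,
#  'followup_modality': 'CT', 'followup_interval_months': 12}
```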

I think it’s important here to note the value of combining workflow-integrated access to AI with the collaborative feedback loop of the AI Marketplace. Access from within the PowerScribe desktop makes AI usable from a practical point of view. Giving radiologists and developers a built-in channel to share feedback on AI model implementation and results makes it truly useful. It enables ongoing refinement of AI models for improved accuracy and specificity and addresses radiologists’ preferences and priorities. It creates a virtuous cycle that builds confidence and capability in the technology and fosters increased adoption.

KH: What should radiologists expect as we move forward on closing the AI last mile?

Sander: In a word, I would say “momentum.” By that I mean accelerating progress toward widespread practical adoption in the near term. As I noted earlier, there are still multiple challenges ahead, including how reimbursements will be structured and access to diverse training data for creating robust diagnostic models. We are also seeing interesting report creation challenges arising from data generated by AI that was previously impractical for radiologists to obtain, and we look forward to collaborating with our clients to determine how to leverage all this data in reports in the future.

The growth and advancements we’re already seeing with the AI Marketplace, PowerScribe One, and CLU are really making the destination more clearly within reach than ever before. We’re also seeing work by multiple stakeholders on issues like reimbursements, for example, and by the ACR Data Science Institute on the data challenges. As you noted in a blog post late last year after the RSNA conference, there has been a real sea change in the outlook for AI within the radiology community. It’s highly motivating.

Ultimately, the end of that last mile is using AI to augment radiologists so they can work more effectively and efficiently, meaningfully address burnout, and most of all, improve patient outcomes.

KH: Thank you, Sander. It’s exciting to hear the details of how we and the radiologists with whom we work closely are addressing these last mile challenges. Beginning on May 31, Sander will share these and other insights during RSNA’s spotlight course, “Radiology in the Age of AI.”

The Real View is a Q&A blog series with Karen Holzberger, Vice President and General Manager of Nuance Healthcare’s Diagnostic Division. The Real View cuts through the hype and gets to what’s real, here, and now. The blog series features interviews and insights from health IT movers and shakers – and uncovers disruptive technologies that solve challenges, optimize workflow, and increase efficiencies to improve patient care.


Q&A: How our latest internal hackathon brought innovation to non-profits

We rise by lifting others. That’s one of the reasons why our latest internal hackathon Innovation Challenge took on a different mold: one geared toward leveraging employees’ innovative drive and creativity for social good.

We rise by lifting others. Robert Ingersoll said this a long time ago, and it still rings true. It’s one of the reasons why we ran a different kind of internal hackathon from our usual product-focused event. The SS11 Innovation Challenge for Social Good leveraged Nuance employees’ creativity for social good while empowering them to create relevant solutions. It allowed them to target real needs and problems from non-profit organizations that could be solved with Nuance’s technology. The Girl Scouts, a performing arts ticketing service, an organization for at-risk, high-potential students, and a political campaign are just a few of the organizations for which our teams designed. And it was fun! Eduardo Olvera, Senior Manager of User Interface Design in the Cognitive Innovation Group, led the event with Guy Hoovler, Senior Director of Professional Services in the Enterprise Division. Here’s what they had to say:


Why hold an Innovation Challenge for social good?

Guy Hoovler: Innovation Challenge events demonstrate our division’s support for innovative ideas, committing significant time and resources to our employees’ innovative and competitive initiatives. Incorporating the “social good” element this time sparked creativity in a number of the teams, who were motivated by the challenge to solve real-world problems using our tech in a way that did something more than improve a bank balance.


What is the impact of the event and the ideas? How will this help non-profits?

Eduardo Olvera: The goal was to facilitate team-building and cross-training within the Enterprise division while making the teams’ ideas and solutions relevant by tying them to real-world problems. Non-profits benefit because the nature of their problems makes them well suited to being solved with technology.


What was the most exciting part of the event?

Guy: Seeing the excitement and energy the teams brought to their presentations, especially while watching some of the teams execute live tech demos that actually worked!

Eduardo: For me, it was the response we received from participants, supporters, and organizers. We had a very successful event, with nine teams and more than 40 participants, all of whom made it to the finish line. We received support from upper management, directors and managers, local liaisons, subject matter experts, IT specialists, facilities administrators, human resources, and legal staff. This event also marked many firsts in the history of our Innovation Challenges: the first Enterprise challenge to combine teams (CIG + PS), multiple locations, a curated list of projects for teams to choose from, an emphasis on social good, a shared development platform, flexibility across a longer development period, idea checkpoints, and SME office hours.


What does innovation mean to you?

Eduardo: Innovation is the application of creative processes and ideas in novel and useful ways that add value and solve real-world problems. The biggest mistake I see is people not realizing that innovation is bigger than a product or technology platform, which means companies tend not to put in the level of support required to make it happen and then grow.

Guy: Innovation involves rethinking both the problem and the solution and implementing what is needed to fill in the gaps. I find things to be most innovative when they illustrate how we’ve been focused on the wrong questions.


Why is innovation important for not only Nuance, but for the greater community?

Eduardo: Because the companies, organizations, and communities that truly understand innovation, build strategies around it, and execute it well are the ones that achieve and sustain long-term success, all while keeping their employees, members, and volunteers satisfied, fulfilled, productive, and excited about their personal and professional futures.

Guy: Innovation keeps our minds agile, whether we do it ourselves or appreciate it when done by others. Seeing and doing innovation both serve to break us out of today’s complacency and get us thinking in a constructive context about what happens next.


Needless to say, it was a tight (and impressive) competition. In the end, the winning team decided to donate their prize money to The Water Project, a non-profit working to end the water crisis and provide access to clean, safe, and reliable water across sub-Saharan Africa. We were honored to donate $1,000 to help them further their mission, and our teams are looking forward to the next Challenge.



Sorry, Team “Yanny” – AI says it’s “Laurel”

Is it Laurel, or is it Yanny? The audio clip that has divided the world has an answer – and according to Nuance researcher Nils Lenke, it’s backed by artificial intelligence.

Back in 2015, the internet was in an uproar over the phenomenon known as “The Dress” – a seemingly innocuous photo of a black and blue (or was it white and gold?) dress that prompted experts to investigate the science behind human vision and color perception. This week, a new sensation has ignited another fiery debate.

While studying for an exam, a high school freshman found an audio clip for the word “laurel” on Vocabulary.com – except to her, it didn’t sound like “laurel” at all. The recording was posted to Reddit, and chaos ensued. The sound bite left listeners astonished that they could hear something so entirely different from the person sitting next to them.


So, is it Laurel or Yanny? The audio clip that has divided the world finally has an answer. Well, another perspective at least. And this one is backed by artificial intelligence.

We used our Dragon platform, the speech recognition software behind many Intelligent Assistants in the car, on TVs, IoT devices, and beyond, to find out what it would make of the clip. The result was “Laurel,” without a doubt.

Nils Lenke, senior director of corporate research at Nuance, said: “Dragon hears ‘Laurel.’ Speech recognition technology today is based on artificial neural networks that are supposed to mimic the way the human brain works. We train these networks on thousands of hours of human speech so they can learn how the phonemes – the smallest building blocks of language – are pronounced. But it is clearly not an exact copy of how we as humans hear and interpret sounds, as this example shows. So, we can all still feel ‘special’ – especially those who hear ‘Yanny’!”
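
For a sense of what “learning how phonemes are pronounced” means in practice, here is a toy sketch of frame-level phoneme decoding. The probabilities are invented to show how acoustically ambiguous input can tip toward /l/ (Laurel) rather than /j/ (Yanny); real systems like Dragon decode over far richer acoustic and language models.

```python
import numpy as np

# Toy phoneme decoding: an acoustic model emits, for each audio frame,
# a probability distribution over phonemes, and the decoder picks the
# most likely ones. All values here are invented for illustration.
phonemes = ["l", "j", "ao", "ae", "r", "n", "iy"]

# One row per frame: posterior probabilities from a hypothetical network.
frames = np.array([
    [0.55, 0.40, 0.02, 0.01, 0.01, 0.005, 0.005],  # /l/ narrowly beats /j/
    [0.05, 0.03, 0.60, 0.25, 0.03, 0.02, 0.02],    # /ao/ edges out /ae/
    [0.02, 0.02, 0.03, 0.03, 0.80, 0.05, 0.05],    # /r/ is unambiguous
])

best = [phonemes[i] for i in frames.argmax(axis=1)]
print(best)  # ['l', 'ao', 'r'] -> the "Laurel" reading wins by a hair
```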

Artificial intelligence stands with Laurel. Maybe those of us in camp Yanny will be on the right side of technology for the next viral controversy.


A tribute to Stephen Hawking and his iconic voice

Stephen Hawking, the famous cosmologist and theoretical physicist, passed away last week at the age of 76. Like much of the world, I was fascinated by both his accomplishments and his iconic voice. Having spent my career in the text-to-speech field, I was lucky to have met Hawking once, working on his synthetic voice then and, years later, again with Nuance.

The famous cosmologist and theoretical physicist Stephen Hawking passed away last week. Apart from his scientific contributions, he was also a role model for people living with a disability. “Concentrate on things your disability doesn’t prevent you doing well, and don’t regret the things it interferes with,” he said in a 2011 interview with the New York Times, “Don’t be disabled in spirit.” Millions of people became familiar with his synthetic, computer-generated voice, which he began using after losing the ability to speak in 1985.

The synthetic voice he used for more than 30 years was generated by a circuit board named CallText 5010, made by a company called Speech Plus, which is now part of the Nuance family. Hawking originally owned three copies of the CallText board, but one broke after falling to the ground. Concerned that the remaining hardware would break or stop working in the future, Intel, which had begun providing him with a PC and technical support, wanted to replace his hardware synthesizer with a software version. They didn’t want to risk leaving the scientist without his voice again.

When I was a postdoc at the Oregon Graduate Institute working on speech synthesis, I was contacted to help with the project. Over the following months, I borrowed Hawking’s spare CallText board and recorded 2,000 speech sounds called diphones with it (synthesis by concatenating diphones was the dominant text-to-speech technology in those days). When Professor Hawking was in Oregon for a lecture a few weeks later, I presented the new voice to him in his hotel room in downtown Portland. I connected it to a loudspeaker so he could hear the sample sentences I had prepared from his lectures read aloud. After a few minutes of silence (during which Hawking was typing) came the reply: “I like it. But more importantly, will my wife like it?”
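
For readers unfamiliar with diphone synthesis, here is a toy sketch of the idea: a word is stitched together from prerecorded transitions between adjacent phonemes. The sine-wave “recordings” below are stand-ins invented for illustration; a real inventory, like the roughly 2,000 diphones recorded from the CallText board, holds actual speech samples.

```python
import numpy as np

# Toy diphone concatenation. Each "diphone" is a synthetic placeholder
# waveform; a real system stores recorded phoneme-to-phoneme transitions.
RATE = 16_000

def fake_diphone(freq: float, ms: int = 80) -> np.ndarray:
    t = np.linspace(0, ms / 1000, int(RATE * ms / 1000), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

# Inventory keyed by phoneme pairs; frequencies are arbitrary stand-ins.
inventory = {
    ("sil", "l"): fake_diphone(220),
    ("l", "ao"): fake_diphone(330),
    ("ao", "r"): fake_diphone(440),
    ("r", "sil"): fake_diphone(550),
}

def synthesize(phonemes: list) -> np.ndarray:
    # Chain the transition units covering silence-to-silence.
    pairs = zip(["sil"] + phonemes, phonemes + ["sil"])
    return np.concatenate([inventory[p] for p in pairs])

audio = synthesize(["l", "ao", "r"])  # a crude "laur..." waveform
print(audio.shape)  # (5120,) samples at 16 kHz
```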

That same evening, I attended his public lecture. I remember feeling a personal connection as he was presenting, and a sense of privilege to have been a part of his story, no matter how small. Hawking ultimately continued to use his original circuit board synthesizer during public appearances. After all, it was that voice that the world had come to recognize as the iconic Stephen Hawking; the new implementation just didn’t sound quite the same. Hawking’s synthesized voice was as much a part of him as our natural voices are of us.


In late 2017, we revisited a project first discussed a few years earlier: Nuance would work with Professor Hawking and his team, agreeing to provide him with a version of the source code of his TTS voice, which we had stored in an archive. The goal was to transition Hawking’s system to a modern software version while maintaining the authenticity of his original voice. Unfortunately, he passed away before we could complete our work together.

Hawking once wrote on his website, “I try to lead as normal a life as possible.” Ironically, there wasn’t all that much normal about him. His wit was unmatched. He once acknowledged that although he had a PhD, “women should remain a mystery.” He was a brilliant physicist, a renowned cosmologist, a respected professor, and a prolific author. He won countless awards and held thirteen honorary degrees. Hawking’s professional success was matched only by the strength and depth of his personal relationships. “It would not be much of a universe if it wasn’t home to the people you love,” Hawking once said. He will be deeply missed by his family, friends, colleagues, and the countless people he inspired in the universe he helped explore. It is a gift to all of us that even when he could not speak, Stephen Hawking never lost his voice.



How the machines will adjust to us: A short story about “conversational AI” growing up

2018 is going to be an exciting year, one that will see the start of a huge leap in conversational AI. Josefine Fouarge takes a look at how the field has developed so far and where it’s going very soon.
By

For years we have been trained in how to interact with machines: how to use a mouse, what to click for a specific action, and maybe even how to write code in a variety of languages. But talking, gestures, and facial expressions are the natural ways for us to communicate. Machines that can understand these nuances have so far been the subject only of Hollywood interpretation.

“So far” are the key words here. Technology has evolved to the point where it can interpret human language and draw a conclusion based on what was said or texted. The complex part is not just the algorithm, though; it’s the ability to combine phonemes into words for speech recognition, letters into words for text recognition, and either one of them into meaning, and then to react based on that. 2018 is going to be an exciting year to witness the start of a huge leap in this area, because today’s technology is already capable of engaging with humans in a conversational way.


Where do we start?

Where do we see conversational interfaces? Chatbots and virtual assistants are probably the best-known examples. Used in customer service scenarios, conversational interfaces can already do a lot. They can handle very specific scenarios like resetting a password, updating an address, or helping select a specific product. Usually, they can be found on a brand’s website, in its messaging and social channels, and even in the IVR.

If you have used a smart speaker like the Amazon Echo, then you have dealt with a machine that interprets your words into meaning for itself. For example, when you ask Alexa to play music, it analyzes your request and then, as a result, starts to play some tunes. Have you ever called a brand and been told to simply ask a question instead of pressing “1”? That’s basically a virtual assistant with speech recognition.
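
A toy sketch of that “words to meaning to action” step might look like the following. Real assistants use trained statistical NLU models rather than keyword matching, and the intents below are invented for illustration.

```python
# Toy mapping of recognized words to meaning and action, in the spirit
# of the "play music" example above. Purely illustrative keyword NLU.
def understand(utterance: str) -> dict:
    text = utterance.lower()
    if "play" in text and "music" in text:
        return {"intent": "play_music"}
    if "reset" in text and "password" in text:
        return {"intent": "reset_password"}
    return {"intent": "unknown"}

def react(intent: dict) -> str:
    actions = {
        "play_music": "Starting some tunes...",
        "reset_password": "Sending a password reset link.",
        "unknown": "Sorry, I didn't catch that.",
    }
    return actions[intent["intent"]]

print(react(understand("Alexa, play some music")))  # Starting some tunes...
```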


What’s the next step?

There is a variety of conversational interfaces available: for example, ones that provide a list of items from which a user can pick; others react to specific keywords and can be used for simple Q&A. The next step beyond these rather “simple-minded” versions is a conversational interface capable of handling all sorts of conversations, back and forth, without the need for human intervention. Today’s state-of-the-art virtual assistant can disambiguate without a pick list, just by asking for the missing information.
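
Here is a minimal sketch of that disambiguation pattern, often called slot filling: the assistant asks only for whatever is still missing instead of presenting a pick list. The slots and prompts are invented for illustration.

```python
from typing import Optional

# A minimal slot-filling sketch: ask for the missing information rather
# than offering a pick list. Slots and prompts are hypothetical.
REQUIRED_SLOTS = {
    "destination": "Where would you like to fly to?",
    "date": "What day do you want to travel?",
}

def next_prompt(filled: dict) -> Optional[str]:
    # Return the first question for an unfilled slot, or None when done.
    for slot, prompt in REQUIRED_SLOTS.items():
        if slot not in filled:
            return prompt
    return None

state = {"destination": "Boston"}  # user: "Book me a flight to Boston"
print(next_prompt(state))          # -> "What day do you want to travel?"
state["date"] = "Friday"           # user answers the follow-up question
print(next_prompt(state))          # -> None: all slots filled, ready to act
```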


That’s the goal.

The final, so far unsolved, stage is truly complex interaction: something that could simulate a heated discussion or a brainstorming session with a colleague; things that require a lot of external data or background information to shape the conversation. These are the areas Nuance is working on, taking automated conversations from a simple back and forth to a genuinely conversational tool that will allow you to augment your life.

To give you an idea of how this future could look, watch our vision of next generation omni-channel customer engagement.


Discover the intelligence behind our conversations

Conversational AI lets consumers engage in natural interactions through text or speech to gain immediate access to information and easy, effortless outcomes through IVR, messaging or web channels.

Learn more
