I recently had the exciting opportunity to engage with Tapestry Networks, as part of their joint effort with the Gordon and Betty Moore Foundation, to shed light on the safe and effective adoption of machine learning diagnostic decision support (ML-DDS) technologies.
As part of the project, the team at Tapestry held a series of conversations with key stakeholders across the healthcare industry, from AI developers and vendors to payers, academics, and community health systems.
They provided a summary of the learnings from those discussions in a recently published ViewPoints report.
The report, which also explores demonstration projects in this space, underscores the need to examine the current state of quality assurance in AI diagnostic decision support technologies and what is required to advance it, both today and in the future.
AI is evolving at a rapid pace and brings huge potential to both reduce provider burnout and improve patient care. And while we know AI can improve outcomes, exactly how to measure its impact, especially in diagnostics, remains up for debate. The ViewPoints report analyzes why that is and suggests potential paths the industry can take to standardize how AI performance is measured.
Here’s what I found especially noteworthy in the write-up.
ML-DDS has a role to play in the future of healthcare
There is significant potential for AI to positively affect diagnostic outcomes, and most stakeholders are optimistic about its future. Alongside that optimism, however, is caution.
The report highlights that while AI is advancing to become a promising diagnostic tool, it lacks standardization and quality control today. A system is needed to ensure AI is being appropriately developed and applied. Any system put in place to standardize how we measure and assess AI development and performance, however, must account for the rapid advancement of AI and how it will integrate with and best support the broader, dynamic, and evolving healthcare ecosystem.
Quality assurance (QA) requires multi-stakeholder commitment
It’s important for the industry to consider who should take the lead in QA and what incentives exist for an organization to do so.
The report notes the limits that organizations like the FDA and medical academia face when it comes to establishing guidelines, and it calls out the “chicken and egg” scenario that emerges between payers and providers around diagnostic AI reimbursement. An independent third party offering centralized standard setting would be ideal, but no matter who leads the charge, it is evident that for any standard to move forward there must be commitment from the right stakeholders and the industry at large.
Progress must be the focus
While it is important to think critically about how best to approach QA for ML-DDS technologies, finding a perfect solution is not realistic. Instead, the report looks at ways to make incremental progress.
Any step forward that provides insight into AI transparency, validation, evaluation, and monitoring can help inform how we standardize QA efforts. The report summarizes three proposals that could help drive progress: two centered on evaluating the AI readiness of health systems, and a third focused on AI quality standards and the evaluation of real-world performance within a specialty. All are efforts worth celebrating.
At Nuance, we’ve always been committed to ensuring our AI solutions are built to solve real-world problems. With a purpose-driven approach, we ensure the investments customers make in our technology deliver outcomes for both their organizations and the patients they serve.
Continuing to advance the industry’s understanding of AI’s impact and value is a worthy initiative, and I welcome discussions like the one we had with Tapestry. Dialogue like this will ultimately help us achieve a future where AI is instrumental in advancing care across the healthcare continuum.
To read the full report, visit: https://www.tapestrynetworks.com/publications/diagnostic-ai-technologies.