At the virtual HIMSS Machine Learning & AI for Healthcare event Tuesday afternoon, artificial intelligence researchers and technology leaders from Duke, Mayo Clinic, UC Berkeley and other organizations will unveil their new Health AI Partnership.
Designed as an “innovation and learning network” to help enable safer, more effective deployments of AI software, the collaborative project aims to address some big challenges as applications of these new algorithms, for use cases large and small, accelerate across healthcare.
In the session, “Health AI Partnership: Creating Responsible and Effective AI Software,” experts will discuss the new initiative.
They’ll describe how it hopes to tame some of the “wild west” aspects of the current market – tamping down hype-driven sales strategies for AI and machine learning products and arming health IT decision-makers with academic research and vetted guidelines to help them make sound investments in dependable technologies.
“There is an urgent need to cultivate capabilities across healthcare delivery settings to conduct AI software procurement, integration and lifecycle management,” according to the Health AI Partnership’s mission statement.
“These systems can embed systemic bias into care delivery, vendors can market performance claims that diverge from real-world performance, and the software exists in a state with little-to-no software best practice guidance.”
Without an effort to boost discernment and understanding about the types of AI software that are proliferating across the space – and a focused effort to set standards for how staff should be trained, and how new technologies should be supported and maintained – there’s the risk that “innovation in this space will remain compromised, with real risk of harm to clinicians and patients,” according to the experts from Duke, Mayo Clinic and UC Berkeley.
Those health systems – and eventually, hopefully, others – want to help build out an ecosystem of different stakeholders to improve the way AI tools are acquired and deployed in healthcare provider settings.
Toward that end, researchers will develop an online curriculum to help educate IT leaders and will work with stakeholders such as target users, regulators, policymakers, payers and more.
“We hope to get this to a point where it’s no longer just kind of an ad hoc process, where an inbound email from a sales rep can get something adopted without being taken through a more standardized, evidence-driven process.”
Mark Sendak, Duke Institute for Health Innovation
AI and machine learning tools in healthcare are changing fast, and their adoption across healthcare has “far outpaced the maturation of the ecosystem to ensure the safe, effective and responsible use and diffusion of these technologies,” these experts say.
“The low barrier to entry for new-product development coupled with significant public and private funding has created compromised market conditions where many healthcare delivery settings are not properly equipped to conduct procurement, integration and lifecycle management activities.”
This new collaboration – billed as the first of its kind – wants to help fix that. The guidance and curriculum developed by the Health AI Partnership will be open source and available online. The project – the first phase of which leaders hope to achieve within the next 12 months – “will be gradual and will require significant investment and public sector intervention.”
Setting standards for quality, efficacy
I spoke recently with Dr. Mark Sendak, population health and data science lead at the Duke Institute for Health Innovation, who has helped spearhead the new initiative. He offered his take on what it all means – and where he and other leaders from Mayo Clinic, UC Berkeley, DLA Piper and other organizations hope to take it.
(In addition to participating in the aforementioned panel discussion Tuesday afternoon, Sendak and I will also be participating in a live and interactive “Digital Dialogue,” focused on the partnership and other aspects of healthcare AI’s ongoing evolution, on Wednesday, from 12:50-1:10 p.m.)
“Right now, there is a glut of solutions that are being sold, and being implemented, that are of questionable quality,” said Sendak.
Oftentimes, healthcare stakeholders assume there’s not much to be done about that – that it’s the FDA’s job to deal with, he said. “They point to regulators, saying that this is their problem.”
But there are a lot of smart people in healthcare, and many organizations have the discernment and expertise to scrutinize drugs, say, or labs. The same can be done for AI software, Sendak explained.
“We do believe that health systems can and need to build these capabilities internally. And that’s where we hope to get this – to a point where it’s no longer just kind of an ad hoc process, where an inbound email from a sales rep can get something adopted without being taken through a more standardized, evidence-driven process.”
The goal of the new partnership, he said, is to “decentralize the high concentration of technology [and] regulatory expertise” from some leading health systems across the U.S. and use it to develop guidelines to help other organizations make smarter decisions.
Duke has been working with Mayo Clinic for about 18 months to build this initiative, said Sendak. “They’re one of our peer institutions that does a lot of internal technology development.”
At both organizations, “not only are we building and implementing these things internally, but we also are commercializing these, and we have a long history of external partnerships to commercialize technologies,” he explained.
Sendak’s Duke Health colleagues, Suresh Balu, associate dean for innovation and partnerships, and Michael Gao, senior data scientist, are also leaders at the new Health AI Partnership.
Working alongside Mayo Clinic, “we, in parallel, had been putting together the internal processes for how to develop [and] evaluate these types of algorithms, and to inform not just IT decision-makers but also clinical leaders for appropriate use of AI software.”
David Vidal, director of regulation at Mayo’s Center for Digital Health, and Mark Lifson, SaMD systems engineering manager at the center, have been key players on the project.
“We had reached very similar conclusions and approaches for how to develop and integrate AI software in clinical practice,” Sendak said of Duke and Mayo.
Meanwhile, two professionals from UC Berkeley bring a different set of expertise. Deirdre Mulligan, a professor in the university’s School of Information, and Deb Raji, a Ph.D. candidate there, offer a “wealth of experience, looking at different types of technologies, different industries and thinking critically about communication, about training, about regulation,” he said.
“AI software being adopted as commercial products in healthcare is new,” he explained. “So we want to make sure that we’re learning from other industries, other experiences, as well. I think that we all can agree that there’s been some pretty massive missteps in other industries for how technology has been developed and adopted, and we don’t want to neglect those learnings.”
Law firm DLA Piper rounds out the partnership’s founding members. Dr. Danny Tobey, its global lead for AI practice, also happens to be a physician and software entrepreneur.
He and the firm “have been leaders in the field for many years now, helping commercial entities get adoption and navigate the regulatory process,” said Sendak.
So the first priority, starting in early 2022, is information gathering – accomplished with a series of interviews and observations within Duke and Mayo Clinic, and then across seven other sites comprising academic medical centers, community hospitals and payer organizations.
The group wants to learn about key best practices and challenges faced by care delivery organizations in deploying healthcare AI, Sendak explained, with an early focus on three areas: procurement, integration and lifecycle management.
“This may involve folks involved in the technology purchasing part of the organization,” he said.
At first, the team will examine how decisions are made in analogous areas.
“We’re trying to look at how lifecycle management programs are run for pharmaceuticals; there’s a long history of antimicrobial stewardship programs, laboratory quality management programs, pharmacy and therapeutics committees,” said Sendak.
“In each of these settings, [we’re interested in] seeing how those teams are staffed, how they make decisions, who the stakeholders are – because we want to leverage existing capabilities and processes that exist within health systems,” he said.
AI procurement, ideally, “should fit within the mental model of how healthcare organizations do these other types of procurements and management,” he said.
By midyear, the hope is to have some defined priorities for how the curriculum should be developed. And then, at about nine months in, to create that curriculum and do a first round of user testing.
“We’re going to put material online that will be freely available, do a request for comments, so the goal is that a health system will identify internally which stakeholders they want to participate in the curriculum, and that will ultimately be the decision-making body for procurement, integration and lifecycle management,” said Sendak.
“After we get that feedback and do the next round of iteration, we would have a set of material, and we would know what the next steps are in terms of really strengthening this and getting this to a place where potentially there’s going to be partnerships with professional societies, some type of more official certification.”
Groups such as the Joint Commission, with its site visits and hospital certification, “may take interest in this,” he added. “There may need to be dialogue with CMS around reimbursement for different types of algorithms and how those algorithms are adopted. Or conversations with the FDA. We expect a lot of stakeholder engagement to happen during the 12 months.”
Sendak emphasizes that this whole endeavor is “not meant to be an endorsement of any specific product. It’s meant to be agnostic of product, agnostic of disease state. This is meant to be a general framework.”
That said, the initiative is focused on two specific subsets of healthcare AI.
“One type of AI is around diagnosis and treatment decisions: individual, patient-level diagnosis and treatment,” he explained.
“The other is prioritization for some type of resource or program. You may be familiar with Ziad Obermeyer’s work a few years back around prioritization of patients for care management programs. It’s that second bucket where we already know there needs to be a lot of professional development [and] curriculum guidance for how to scrutinize those algorithms.”
With those two use cases in mind, “our goal is that by the end of the 12 months, we’re in a place where there is a much stronger consensus and mutual understanding of what are the important questions to ask, what type of information needs to be gathered during procurement, integration [and] lifecycle management,” said Sendak.
While Sendak is hopeful this process will eventually “shape the practices of technology developers,” he said the decision was intentional to “start supporting the needs of the health systems that are making these decisions.
“Right now, we do not believe that folks have the information that they need to be making informed procurement, integration and lifecycle management decisions,” he said. “The goal is that people can make informed decisions. But our hypothesis is that to make informed decisions, you have to have a baseline level of understanding and of expertise.”