Using AI in healthcare: separating the hype from the helpful

Of all the industries that romanticize AI, healthcare organizations are arguably the most smitten. Hospital directors hope that AI will one day perform healthcare administrative tasks such as scheduling appointments, entering disease severity codes, managing lab tests and patient referrals, and remotely monitoring and responding to the needs of entire cohorts of patients during their daily lives.

By improving efficiency, security and access, AI could be of huge benefit to healthcare, says Nigam Shah, professor of medicine (biomedical informatics) and biomedical data science at Stanford University and an affiliated faculty member of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

But caveat emptor, says Shah. Buyers of AI in healthcare must consider not only whether an AI model will reliably deliver the right output – which has been the primary focus of AI researchers – but also whether it is the right model for the task at hand. “We have to think beyond the model,” he says.

This means that executives need to consider the complex interplay between an AI system, the actions it will direct, and the net benefit of using AI compared to not using it. And before executives bring an AI system on board, Shah says, they need a clear data strategy, a way to test the AI system before buying it, and a clear set of metrics for evaluating whether the AI system is achieving the goals the organization has set.

“In implementation, AI should be better, faster, safer and cheaper. Otherwise it’s useless,” says Shah.

This spring, Shah is leading a Stanford HAI executive education course for senior healthcare executives called “Safe, Ethical and Cost-Effective Use of AI in Healthcare: Critical Topics for Senior Leadership” to address these issues.

The business case for AI in healthcare

A recent McKinsey report outlined the different ways in which innovative technologies such as AI are slowly being integrated into healthcare business models. Some AI systems will improve organizational efficiency by performing routine tasks, such as assigning billing severity codes. “You can have a human read the chart and take 20 minutes to assign three codes, or you can have a computer read the chart and assign three codes in a millisecond,” Shah says.

Other AI systems can increase patients’ access to care. For example, AI systems can ensure that patients are referred to the right specialist and that they undergo important tests before a first visit. “Too often, patients’ first visits to specialists are wasted because they’re told to have five tests and return in two weeks,” Shah says. “An AI system could short-circuit that.” And by skipping these wasted visits, doctors can see more patients.

AI could also be beneficial for health management, Shah says. For example, an AI system can monitor patients’ medication orders, or even guide patients at home when decline is imminent. So-called hospital-at-home programs may demand more nurses than are available, Shah says, “but if we can place five sensors in the home to provide early warning of a problem, such programs become feasible.”

When to use AI in healthcare?

Despite its widespread potential, there are currently no standard methods for determining whether an AI system will save money for a hospital or improve patient care. “Any guidelines that people or professional associations have given relate to ways to build AI,” Shah says. “There is very little about whether, how or when AI should be used.”

Shah’s advice to executives: Define a clear data strategy, create a plan to try before you buy, and set clear metrics to evaluate whether implementation is beneficial.

Define a data strategy

Because AI is only as good as the data it learns from, executives must have a strategy and staff to collect disparate data, properly label and clean that data, and continuously maintain the data, Shah says. “Without a data strategy, there is no hope for a successful AI implementation.”

For example, if a supplier sells medical image reading software, the purchasing organization must have a substantial set of retrospective data on hand that it can use to test the software. In addition, the organization must have the ability to store, process and annotate its data so that it can continue to test the product in the future to verify that it is still working properly.
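
As a loose sketch of what that capability could look like in practice (the fields, file name and example case below are hypothetical, not drawn from the article), the organization might maintain a locally owned registry of annotated cases alongside the purchased model’s outputs, so the product can be re-tested as new data arrives:

```python
# Hypothetical sketch of a locally owned annotation registry that lets the
# organization keep re-testing a purchased model over time. Field names,
# the file path, and the example case are illustrative, not a real schema.
import csv
from dataclasses import dataclass, asdict, fields
from datetime import date

@dataclass
class AnnotatedCase:
    case_id: str          # internal identifier (no patient details in this toy example)
    annotated_on: str     # date the organization's own annotators labeled the case
    local_label: int      # ground truth assigned locally, independent of the vendor
    vendor_score: float   # the purchased model's output for the same case

def append_case(registry_path: str, case: AnnotatedCase) -> None:
    """Append one annotated case so future re-tests have fresh local data."""
    with open(registry_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(AnnotatedCase)])
        if f.tell() == 0:  # brand-new registry: write a header row first
            writer.writeheader()
        writer.writerow(asdict(case))

append_case("local_registry.csv",
            AnnotatedCase("case-0001", str(date.today()), 1, 0.83))
```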

Try before you buy

Healthcare organizations should test AI models on their own sites before buying them and making them operational, Shah says. Such tests will help hospitals separate snake oil – AI that doesn’t live up to its claims – from effective AI, and assess whether the model can be properly generalized from the original site to a new one. For example, Shah says that if a model was developed in Palo Alto, California, but is implemented in Mumbai, India, tests should be conducted to determine if the model works in this new context.
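
One hedged way to operationalize such a trial (the CSV layout and the vendor-claimed performance figure below are invented for illustration) is to score the vendor model on the organization’s own retrospective cases and compare the result with the performance claimed at the development site:

```python
# Minimal "try before you buy" sketch: compare the vendor's claimed performance
# with performance measured on the buyer's own retrospective cases. The CSV
# layout and the claimed AUROC figure are assumptions made for illustration.
import pandas as pd
from sklearn.metrics import roc_auc_score

VENDOR_CLAIMED_AUROC = 0.85   # hypothetical figure from the vendor's materials

def local_trial(retrospective_csv: str, tolerance: float = 0.05) -> bool:
    """Return True if local performance is within `tolerance` of the claim."""
    cases = pd.read_csv(retrospective_csv)   # columns: local_label, vendor_score
    local_auroc = roc_auc_score(cases["local_label"], cases["vendor_score"])
    print(f"Claimed AUROC {VENDOR_CLAIMED_AUROC:.2f}, local AUROC {local_auroc:.2f}")
    return local_auroc >= VENDOR_CLAIMED_AUROC - tolerance

# A large gap between the claimed and local numbers is the "snake oil" signal
# Shah describes, or evidence that the model does not generalize to the new site.
```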

In addition to verifying that the model is accurate and generalizable, executives will need to consider whether the model is actually usable when deployed, whether it can be smoothly integrated into existing workflows, and whether there are clear procedures for verifying how well the AI works after implementation. “It’s like a free pony,” Shah says. “There may not be a cost to buy it, but there could be a huge cost to build a shed for it and feed it for life.”

Establish clear metrics for deployable AI

Buyers of AI systems should also evaluate the net benefit of an AI system to help them decide when to use it and when to turn it off, Shah says.

This means thinking about things like the context in which an AI is deployed, the potential for unintended consequences, and the healthcare organization’s ability to respond to the AI’s recommendations. For example, if the organization is testing an AI model that predicts readmissions of discharged patients and flags 50 people for follow-up, the organization must have staff available to do that follow-up. If not, the AI system is not helpful.
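
A back-of-the-envelope calculation makes the capacity point concrete; all of the numbers below are invented for illustration rather than taken from the article:

```python
# Toy net-benefit check for a readmission model. All numbers are invented:
# flags beyond the team's follow-up capacity produce no benefit at all.
def net_benefit(flagged: int, precision: float, capacity: int,
                benefit_per_true_positive: float, cost_per_followup: float) -> float:
    """Expected value of acting on the model's flags, capped by staffing."""
    followed_up = min(flagged, capacity)              # staff can only reach this many
    expected_true_positives = followed_up * precision
    return (expected_true_positives * benefit_per_true_positive
            - followed_up * cost_per_followup)

# 50 patients flagged but only 20 follow-up slots: much of the model's output goes unused.
print(net_benefit(flagged=50, precision=0.3, capacity=20,
                  benefit_per_true_positive=8000.0, cost_per_followup=500.0))
```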

“Even if the model is built well, given your business processes and your cost structure, it may not be the right model for you,” Shah says.

Ripple effects of AI in healthcare

Finally, Shah warns, executives need to consider the broader implications of AI deployment. Some applications may displace people from long-term jobs, while others may augment human work in ways that expand access to care. It is difficult to know which impact will come first or which will be greater. And eventually, hospitals will need a plan to retrain and redeploy displaced workers.

“While AI certainly has a lot of potential in healthcare,” Shah says, “to realize that potential, organizational units need to be created that manage data strategy, the machine learning model lifecycle, and end-to-end delivery of AI in the healthcare system.”

Katharine Miller is a contributing writer for the Stanford Institute for Human-Centered AI.

This story originally appeared on hai.stanford.edu. Copyright 2022
