Federal incentives won’t fix AI’s “market failure” in healthcare

In an environment as complex as healthcare, it should come as no surprise that the artificial intelligence (AI) and machine learning market is still relatively early in its maturation process. Expecting the market to be further along is a bit like expecting a toddler who can do single-digit addition to do calculus; we're just not there yet.

The authors of a recent STAT+ article titled "A Market Failure Prevents Efficient Distribution of Healthcare AI Software" argue that the adoption of AI software in healthcare remains limited and propose what the industry can and should do to advance its implementation in a clinical decision support capacity.

To correct what they consider a "market failure," the authors propose "a reimbursement framework and policy intervention" to better align AI software adoption with emerging best practices. Among their observations, the authors state that most AI solutions implemented in hospitals and health systems today are of "dubious" quality, that adoption happens de facto through existing electronic health record (EHR) vendors, and that high unit economic costs are the cause of the limited adoption of AI software.

But are these factors a market failure? Or is the market functioning exactly as it should?

And, if the EHR Incentive Program failed to achieve interoperability and led to adverse unintended consequences (which the authors acknowledge), should we apply a similar policy playbook to AI?

The answer to this last question: No, absolutely not.

No, AI is not a market failure and policy mechanisms will not “fix” it

To drive AI adoption, the authors of the STAT+ paper call for policy intervention and payment incentives. There are a few problems with this argument and their proposed approach to resolving the situation.

First, the authors neither define what a "market failure" is nor demonstrate that AI qualifies as one. Market failure is generally defined as the inefficient distribution of goods or services, often because the benefits created are not realized by the buyer. An example of this in healthcare is e-prescribing, a technology that doctors must adopt but whose benefits accrue largely to other stakeholders (including pharmacies, payers, and patients).

Second, while the authors break down the cost structures (fixed vs. variable) of AI adoption and use, they stop short of actually quantifying what the unit or instance cost of AI implementation really is. Nor do they quantify the value or public benefit of AI and compare it to its cost, which makes designing a payment program essentially impossible.

Third, while AI oversight and quality assurance are incredibly important — with many coalitions and public-private partnerships forming for this very reason — the authors don't demonstrate the harm caused by the lack of AI adoption. (One likely reason: demonstrating and quantifying harm is nearly impossible at this stage of AI development in healthcare, and there are few examples documenting the benefits.)

Fourth, without assigning value to its implementation, the authors argue for compensation mechanisms for the adoption and use of AI. This would be a continuation of "pay for effort and expense," not payment for results — the approach that already exists under our dominant fee-for-service payment mechanism. That approach has been tried and has proved inadequate for one reason: it rewards volume, not results.

Fifth, the authors do not specify the use cases to which AI policy mandates would apply. Would incentives initially cover only clinical decision support for certain conditions? AI is so immature that there likely isn't yet evidence to justify singling out a specific use or capability.

The authors also argue that without a financial incentive program to boost AI adoption, there will be a "digital divide," with AI adoption and its value limited to richer health systems that have the resources and structure to take on such investments. But is that so bad?

Larger, richer systems generally have more financial flexibility to acquire innovative technology and invest in change management programs that naturally have uncertain outcomes. Some of these efforts will fail, especially when untested and unproven (in terms of broad market adoption) technology like AI is adopted; this is part of the broader process by which market forces determine which technologies have merit and which do not, and the process by which the companies offering these solutions find product-market fit.

In other words, bigger, richer systems can afford these kinds of failures; smaller systems cannot. The fact that there may be a "digital divide" isn't necessarily a bad thing if it enables market feedback loops that reduce the risk of poor investments for systems that can't afford them.

Should AI be treated differently?

The Unintended Consequences of Federal Incentives: Learning from EHR Experience

Finally, the authors argue for a large-scale set of financial incentives for health systems to adopt and use AI.

Unfortunately, federal incentives as a policy mechanism are not well suited to newer technologies and business models that have yet to be proven. One can look to recent experience — to which the STAT+ authors also refer — to witness the folly of such an endeavor.

The HITECH Act provided $35 billion in federal incentives to encourage the adoption and "meaningful use" of EHRs by physicians and hospitals. To ensure the integrity of the program and realize the benefits of EHR adoption, policymakers instructed the Office of the National Coordinator for Health IT (ONC) to develop usage requirements that doctors and hospitals would have to meet in order to receive the incentives. This put ONC in the position of predicting how physicians would use EHRs and create value. Not surprisingly, its best guesses from a decade ago proved wide of the mark. This is not a knock on ONC, but an admission that few of us can accurately predict the future, especially when it comes to immature technology that is likely to evolve significantly in the coming years.

Finally, the STAT+ authors themselves acknowledge that an unintended consequence of the EHR Incentive Program (part of HITECH) was that "EHR vendors turned this windfall of taxpayer money into a barrier to entry," which those vendors are now using to promote their own AI solutions. The authors don't seem to consider that another federal incentive program could produce a similar windfall for AI vendors, who would raise their own barriers to entry.

Yet this is precisely what the STAT+ authors propose with an AI incentive program.

The reality is that the federal government is poorly suited to implement such an incentive program: it is too slow to keep up with the pace of innovation in AI, and too big to fail fast. Navigating the inevitable market failures, new technological developments, and lessons learned along the way is better left to individual AI companies and health systems.

The best example of subsidizing health IT adoption the right way is e-prescribing. Federal incentives to promote e-prescribing adoption beginning in 2009 were a notable success: by 2010, 40% of physicians who had adopted e-prescribing did so in direct response to the program. The market — and competitive landscape — for e-prescribing grew in large part because e-prescribing was an established technology, standards existed to ensure interoperability between doctors and pharmacies, an ecosystem and network infrastructure were already in place, and studies had demonstrated the benefits.

For e-prescribing, the value of the technology was already proven. We are not there yet for AI.

If there is value, the market will find it. So what role should government play?

As the EHR Incentive Program's $35 billion failure reinforced, health IT adoption is not something that can or should be driven by policy intervention alone — especially when a technology is this immature.

Still, there may be roles for government. As an industry convener, it can engage industry, technology, and academic experts to advise agencies and provide standards recommendations addressing the policy and technical issues faced by AI developers and implementers. As the nation's largest payer (through CMS), the government can encourage adoption once standards are established and use cases have proven their worth by linking incentives to reimbursement; alternatively, expanding the use of value-based payment models creates the conditions under which health systems will naturally adopt AI that is proven to improve the quality of care and outcomes.

In addition, the STAT+ authors argue that the Joint Commission, a nonprofit responsible for standards-setting and accreditation, has a role to play in the validation and monitoring of AI software. This is indeed a good idea — and one best played by a private, reputable organization.

If AI delivers enough value, the market can and will find that value. If it doesn't, the government shouldn't be in the business of steering AI adoption through funding and payment mechanisms — least of all by using the earlier HITECH incentive framework as a starting point.
