This year, executives from nearly every major health insurance company made the same declaration in calls with Wall Street analysts: Using artificial intelligence to make coverage decisions would help save them money.
Even the Trump administration is testing AI’s usefulness in managing the prior authorization process for the Medicare program, as well as seeking to override AI regulation by states.
But class action lawsuits have accused insurers of using AI to wrongfully withhold treatment. And new research from Stanford University outlines the risks of training AI on a current system rife with wrongful denials.
“There is a world in which using AI could make that worse, or at least replicate a bad human system, because the data that it would be training on is from that bad human system,” said Michelle Mello, a co-author of the study.
Still, Mello said, the research team found “real positives alongside the risks.”
In this video produced by KFF Health News’ Hannah Norman, Darius Tahir, a correspondent covering health technology, explains.
You can read Tahir’s recent coverage of AI’s use by health insurers below:
- “Red and Blue States Alike Want To Limit AI in Insurance. Trump Wants To Limit the States,” by Darius Tahir and Lauren Sausser.
- “AI Will Soon Have a Say in Approving or Denying Medicare Treatments,” by Lauren Sausser and Darius Tahir.
By Darius Tahir and Hannah Norman | April 10, 2026
We encourage organizations to republish our content, free of charge. Here’s what we ask:
You must credit us as the original publisher, with a hyperlink to our kffhealthnews.org site. If possible, please include the original author(s) and “KFF Health News” in the byline. Please preserve the hyperlinks in the story.
It’s important to note that not everything on kffhealthnews.org is available for republishing. If a story is labeled “All Rights Reserved,” we cannot grant permission to republish that item.
Have questions? Let us know at [email protected]