
Two skilled nursing patients filed a class action suit against Humana on Tuesday, alleging their Medicare Advantage benefits were prematurely cut short because of Humana's wrongful reliance on artificial intelligence.

The lawsuit alleges that Humana regularly overrides clear post-acute care needs and the recommendations of doctors in order to cut costs. 

“This putative class action arises from Humana’s illegal deployment of artificial intelligence (AI) in place of real doctors to wrongfully deny elderly patients care owed to them under Medicare Advantage Plans,” the suit states. “Humana knows that the nH Predict AI Model predictions are highly inaccurate and are not based on patients’ medical needs but continues to use this system to deny patients’ coverage.”

The lawsuit was filed Tuesday in the US District Court for the Western District of Kentucky. It is the newest front in a war over the future of AI in healthcare. The court's decision could help clear the way for AI to remain on its current trajectory in the sector, or force insurers and providers to take a hard look at how this emerging technology impacts patients.

AI is increasingly utilized to streamline time-consuming processes in the healthcare sector, but the technology has drawn scrutiny from policymakers and the public — especially when it is used to make healthcare decisions that would typically be made by humans.

The Senate Committee on Homeland Security and Governmental Affairs sent a letter in May to demand that the nation’s largest Medicare Advantage insurers — including Humana — provide more transparency about how they use AI in such care decisions.

Demands for transparency and accountability continued to build through November, when 30 House Democrats urged the Centers for Medicare & Medicaid Services to investigate the ways algorithms are being deployed in the healthcare sector. 

The same month, UnitedHealth Group was hit with a similar class action lawsuit over its use of the same nH Predict AI algorithm used by Humana.

Denial of post-acute coverage

JoAnne Barrows and Susan Hagood, the plaintiffs, were admitted to skilled nursing facilities for post-acute care and rehab after injuries they sustained in 2021 and 2022, respectively. 

Both were receiving ongoing care following a hospital stay, which should have been covered under their plans, according to the lawsuit. 

“Under Medicare Advantage Plans, patients who have a three-day hospital stay are typically entitled to up to 100 days in a nursing home. With the use of the nH Predict AI Model, Humana cuts off payment in a fraction of that time. Patients rarely stay in a nursing home more than 14 days before they start receiving payment denials,” the suit asserts.

At the time her coverage was cut, Barrows' doctor was allegedly still recommending further rehabilitation treatment and advising that she stay off her feet for another month.

“Ms. Barrows and her doctor were bewildered by Humana’s premature termination of coverage,” the lawsuit said. 

Claims of wrongdoing

The class action suit lays the blame for the plaintiffs' coverage denials squarely on Humana, citing the way it uses AI algorithms and undermines the ability of doctors and employees to consider the nuances of a patient's case.

“Humana wrongfully delegates its obligation to evaluate and investigate claims to the nH Predict AI Model. The nH Predict AI Model spits out generic recommendations based on incomplete and inadequate medical records and fails to adjust for a patient’s individual circumstances,” the lawsuit alleges.

It also asserts that Humana employees are reprimanded or fired for going against the AI’s recommendations and that this constitutes purposeful misconduct by the insurer.

Mark Taylor, director of corporate communications at Humana, said that Humana does not comment on pending litigation, but he did provide McKnight's Long-Term Care News with a statement on the company's use of AI.

“At Humana, we use various tools, including augmented intelligence, to expedite and approve utilization management requests and ensure that patients receive high-quality, safe and efficient care,” the comment said. “By definition, augmented intelligence maintains a ‘human in the loop’ decision-making whenever AI is utilized. Coverage decisions are made based on the health care needs of patients, medical judgment from doctors and clinicians, and guidelines put in place by CMS. It’s important to note that adverse coverage decisions are only made by physician medical directors.”

It's unclear how many people have been affected by the alleged misuse of AI, but given how widespread the technology already is, the lawsuit claims that thousands, and possibly millions, of people could be eligible for payment if the case goes the plaintiffs' way.