How Insurance Companies Use AI to Deny Your Claims

Your insurer probably does not have a person reviewing your claim. It has a model. And that model was trained, at least in part, on data about what previous claims got denied. If that sounds like a system designed to say no, that is because, functionally, it is.

AI in insurance is not a future concern. It is happening now, across health, auto, home, and life coverage. The pitch from insurers is efficiency and objectivity: computers don't get tired, don't play favorites, and process thousands of claims in seconds. All of that is true. The part they leave out is that computers also don't explain themselves, don't care about your specific situation, and will happily automate discrimination if discrimination is baked into the training data.

Two Ways AI Gates Your Coverage

A 2024 research overview by van Bekkum and Borgesius identifies two dominant patterns in how insurers now deploy AI. The first is data-intensive underwriting: using vast new data sources, from credit scores to shopping behavior to social media activity, to decide who gets coverage and at what price. The second is behavior-based insurance: continuous real-time monitoring, like telematics in your car, that adjusts your premiums based on how you live.

Both patterns produce the same structural problem. The more data points a model ingests, the harder it becomes for anyone, including regulators, to trace why a specific decision was made. The insurer gains precision. You lose the ability to understand, let alone contest, what happened to your application.
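
To make the behavior-based pattern concrete, here is a minimal sketch of how a telematics pricing model might adjust a monthly premium. Every feature name, weight, and dollar figure below is invented for illustration; real programs are proprietary and ingest far more inputs.

```python
# Illustrative sketch only: a hypothetical behavior-based pricing model.
# All weights and the base premium are invented; real telematics programs
# use proprietary models with far more inputs than these four.

BASE_MONTHLY_PREMIUM = 120.00

# Hypothetical per-unit surcharges derived from driving telemetry.
WEIGHTS = {
    "hard_brakes_per_100mi": 1.50,
    "night_miles_pct": 0.40,
    "avg_mph_over_limit": 2.25,
    "phone_motion_events": 0.80,
}

def monthly_premium(telemetry: dict) -> float:
    """Adjust the base premium using one month of driving telemetry."""
    surcharge = sum(WEIGHTS[k] * telemetry.get(k, 0.0) for k in WEIGHTS)
    return round(BASE_MONTHLY_PREMIUM + surcharge, 2)

print(monthly_premium({
    "hard_brakes_per_100mi": 6,   # +9.00
    "night_miles_pct": 25,        # +10.00
    "avg_mph_over_limit": 3,      # +6.75
    "phone_motion_events": 4,     # +3.20
}))  # -> 148.95
```

Even this toy model is linear and fully traceable. Production systems stack nonlinear models on hundreds of behavioral inputs, and that is exactly where the per-decision explanation disappears.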

Removing Race from the Model Does Not Remove Racism

The intuitive fix sounds simple: strip out protected characteristics like race, gender, and age, and the model becomes fair. A 2022 actuarial study by Lindholm, Richman, and Tsanakas shows why this does not work. Algorithms can reconstruct proxies for protected characteristics through correlated variables. Your zip code predicts your race. Your credit score correlates with income, which correlates with race. Your purchase history, your commute distance, your device type all carry demographic signal. You can scrub the label off the variable and the discrimination survives.
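
The proxy effect is simple enough to demonstrate with a toy simulation. In the sketch below, the protected attribute never appears in the insurer's feature set, yet a basic model recovers it from zip code and credit score alone. All data and correlation strengths are fabricated; only the mechanism is real.

```python
# Synthetic demonstration: "scrubbed" features still encode the protected
# attribute. Every number here is simulated; no real demographics are used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# The protected attribute the insurer claims not to use.
group = rng.integers(0, 2, n)

# "Neutral" variables correlated with it, as zip code and credit score are
# in practice (the correlation strengths here are invented).
zip_region = (group + (rng.random(n) < 0.25)) % 2   # segregated geography
credit = 650 + 40 * group + rng.normal(0, 30, n)    # income-linked score

X = np.column_stack([zip_region, (credit - credit.mean()) / credit.std()])
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)

proxy = LogisticRegression().fit(X_tr, y_tr)
print(f"protected attribute recovered with {proxy.score(X_te, y_te):.0%} accuracy")
```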

This is called indirect discrimination, and it is largely unaddressed by current law. The model is technically compliant. The outcomes are not neutral.

A 2025 survey by Borgesius and van Bekkum found that consumers actually draw a sharp line here. People broadly accept that an insurer might charge more if you drive recklessly, because that's a behavior you control. They broadly reject the idea that an insurer should infer your risk profile from your social media posts or location data. The public has a coherent moral intuition about this. Current industry practice ignores it.

Training AI on Denial Patterns

This one is worth reading twice. A 2020 paper by Kim and Sridharan describes a deep learning system called Deep Claim, built to predict whether a claim will be denied by a payer. The model learns from historical denial data. It then predicts future denials.

The stated purpose is to help providers anticipate rejections so they can fix submissions before sending them. The structural reality is darker. A model trained on historical denials encodes whatever patterns drove those denials, including errors, biases, and deliberate prior-authorization bottlenecks. Scale that model across millions of claims and you scale every flaw in the original dataset. The tool that was framed as helping providers navigate the system is also a blueprint for automating the system's worst habits.
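
The mechanism is easy to sketch. The code below is not the Deep Claim architecture (the paper describes a deep network over claim contents); it is a minimal stand-in on fabricated data, built to show the structural point: the training labels are past payer behavior, so whatever drove past denials becomes the model's definition of a deniable claim.

```python
# Minimal stand-in for a denial predictor; this is NOT the Deep Claim
# architecture. Fabricated data, chosen to show one structural point:
# the labels ARE past payer behavior, so past flaws become ground truth.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical claim features: billed amount, prior-auth flag, region code.
X = np.column_stack([
    rng.lognormal(7, 1, n),    # billed amount in dollars
    rng.integers(0, 2, n),     # prior authorization required?
    rng.integers(0, 100, n),   # provider region code
])

# Historical denial labels. Suppose the payer bottlenecked prior-auth
# claims and denied heavily in a few regions: both patterns live in y,
# and the model will faithfully reproduce them at scale.
y = ((X[:, 1] == 1) & (rng.random(n) < 0.4)) | (X[:, 2] < 10)

model = GradientBoostingClassifier().fit(X, y)
print("predicted denial probabilities:", model.predict_proba(X[:5])[:, 1])
```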

Algorithms That Override Your Doctor

Research by Choudhury in 2018 looked at predictive analytics in post-acute care, specifically the practice of using AI to determine, in advance, how long a patient should stay in a facility before insurance authorization expires. The algorithm reviews data before care is complete and recommends discharge timelines. The insurer uses that recommendation to cut authorization short.

Your clinician might think you need three more days. The model says you don't. The model wins, because the authorization expires and someone has to pay out of pocket if you stay. This is not a hypothetical. Hospitals and rehab facilities operate under these constraints today.
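
A deliberately crude sketch of that mechanism follows. Every rule and number in it is invented; the structural point is that the model's output, not the clinician's judgment, sets the day coverage stops.

```python
# Hypothetical illustration of the authorization mechanism described above.
# Every rule and number is invented; only the structure is the point.

def authorized_days(patient: dict) -> int:
    """Toy stand-in for a proprietary length-of-stay model."""
    base = {"hip_replacement": 14, "stroke": 21}.get(patient["diagnosis"], 10)
    if patient["age"] > 80:
        base += 3
    if patient["lives_alone"]:
        base += 2
    return base

patient = {"diagnosis": "hip_replacement", "age": 74, "lives_alone": False}
model_days = authorized_days(patient)   # 14 days authorized
clinician_days = 17                     # clinician wants three more

# Coverage follows the model. The gap exists only as an appeal or an
# out-of-pocket bill.
gap = max(0, clinician_days - model_days)
print(f"authorized: {model_days} days, uncovered: {gap} days")
```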

Cynthia Rudin's 2018 paper on black-box machine learning makes the core argument plainly: opaque models should not be used for high-stakes decisions. Not because they are always wrong, but because when they are wrong, nobody can figure out why. Affected individuals cannot challenge decisions they cannot see. Doctors cannot advocate for patients when the denial comes from a model that produces no reasoning. Regulators cannot audit a system that offers no audit trail. Research on ethics-based auditing by Mokander and Morley confirms that even formal audits of automated decision systems face serious limits when the system itself is a black box.
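
To make the audit-trail point concrete, here is a small sketch on fabricated data. With an interpretable model, the weight behind every decision can be printed, read, and challenged; a black box offers nothing comparable.

```python
# What an audit trail looks like when it exists. The features and data
# here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))          # three standardized claim features
y = (X @ np.array([2.0, -1.0, 0.0]) + rng.normal(size=1000)) > 0

clf = LogisticRegression().fit(X, y)
for name, w in zip(["billed_amount", "prior_auth_required", "region"],
                   clf.coef_[0]):
    print(f"{name:>20}: weight {w:+.2f}")   # inspectable, contestable
# A denial from a deep network arrives with none of these lines: millions
# of weights and no per-feature reasoning a patient or regulator can read.
```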

What You Can Actually Do

Knowing this doesn't give you a magic override code. But it changes what you should do when a claim gets denied.

Request a written explanation of the denial with specific reasons. Insurers in most jurisdictions are legally required to provide one. If the explanation is vague or circular, that is itself useful information: vague denials are harder to defend in an appeal or complaint.

File an appeal, in writing, and include clinician documentation. Automated systems flag appeals for human review more often than they flag first-pass denials. A human reviewer operates under different pressures than a model.

File a complaint with your state insurance commissioner if you believe the denial was wrongful. Regulators are starting to pay attention to AI in insurance, and documented complaints build the public record that forces policy responses. And as agentic AI gives these systems more autonomy over underwriting decisions, the stakes will only grow.

The industry's argument is that AI makes decisions faster and cheaper. Both are true. Neither is a reason to accept decisions that are wrong, opaque, or discriminatory. Speed and cost savings benefit the insurer. The accountability gap falls on you.