As AI transforms how insurers design and price health plans, it also introduces risks that could deepen existing healthcare inequities. With state-level regulation already emerging, insurers need to act quickly, not just to stay compliant, but to ensure their tools are fair and trustworthy.
We explore the unique challenges facing insurers when using AI algorithms to build and calibrate health insurance plans. We’ll first highlight how AI can exacerbate existing healthcare disparities—more so than traditional underwriting methods. Once we’ve gone over the potential risks, we’ll walk through measures insurers could take to avoid increasingly inequitable outcomes and to stay ahead of state guidelines on using AI in insurance.
Untrackable proxies
It’s difficult to trace how AI algorithms use data. Given the vast data sets insurance carriers hold, it would be near impossible to pinpoint which data points a model draws on, how it uses them and, most importantly, whether any are serving as proxies for race, gender, age or socioeconomic status.
For example, the Affordable Care Act (ACA) limits how much more insurers can charge older people compared to younger people (no more than 3 times as much). But AI can still find ways to discriminate indirectly against older adults. Longevity algorithms analysing gait speed, medication mixes and patterns in bloodwork could segment older people into uninsurable risk categories years before conventional actuarial tables would.
Human underwriters and sales representatives must abide by strict regulations when building and evaluating plans, designed to ensure no data point is used to indirectly discriminate against vulnerable groups. Without appropriate safeguards, the difficulty of tracking how and when an AI model uses a given data point makes that same rigour near impossible to enforce.
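One way to screen for this is a proxy test: if a simple model can predict a protected attribute from the candidate rating features alone, those features are likely acting as proxies. Below is a minimal sketch, assuming scikit-learn and a labelled audit sample; the file, column names and threshold are illustrative, not a prescribed standard.

```python
# Proxy screening sketch: can rating features "reconstruct" a protected
# attribute? A high AUC suggests proxy risk. All names are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

audit = pd.read_csv("audit_sample.csv")  # hypothetical labelled audit extract

FEATURES = ["gait_speed", "rx_count", "bloodwork_trend", "zip_density"]
PROTECTED = "age_over_60"  # binary protected attribute for this screen

X_train, X_test, y_train, y_test = train_test_split(
    audit[FEATURES], audit[PROTECTED], test_size=0.3, random_state=0
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

# AUC near 0.5 means the features carry little signal about the protected
# attribute; values well above that (say > 0.7) flag the set for review.
print(f"Proxy AUC for {PROTECTED}: {auc:.2f}")
```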
Inequities in historical data
AI systems trained on historical data without safeguards risk perpetuating, and deepening, inequities already present in the US healthcare system. Humans can intervene and account for social circumstances; an algorithm left to its own devices builds plans purely from the patterns it finds in that data.
Low-income individuals frequently delay healthcare until conditions become acute. An AI algorithm might interpret the resulting utilisation pattern as higher baseline risk, assigning higher premiums to individuals in lower-income ZIP codes. A human, by contrast, can recognise the access barriers at play and adjust plan design and pricing accordingly.
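One partial mitigation, sketched below under stated assumptions, is to reweight training data so that no income band dominates what the model learns. This does not undo the under-utilisation bias in the labels themselves, but it limits how strongly majority-group patterns anchor the fit. All file and column names are illustrative.

```python
# Reweighting sketch: give each income band equal total influence during
# model fitting. A partial bias mitigation, not a complete fix for label
# bias caused by delayed care. All names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import PoissonRegressor

claims = pd.read_csv("claims_history.csv")  # hypothetical training extract

# Weight each record inversely to its income band's share of the data.
band_share = claims["income_band"].value_counts(normalize=True)
weights = claims["income_band"].map(lambda b: 1.0 / band_share[b])

features = claims[["age", "chronic_conditions", "prior_year_visits"]]
model = PoissonRegressor().fit(
    features, claims["annual_claims"], sample_weight=weights
)
```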
Health literacy and technological barriers
Health literacy and technological barriers can lead to disparities in AI-generated health insurance plans. Many individuals in the US have low insurance literacy, with over 50% of insured American adults finding health insurance details “very” or “somewhat” difficult to understand. Self-service AI platforms may be too confusing for many customers to navigate, increasing reliance on agents, who may not always be accessible.
Additionally, AI-driven plan recommendations often depend on digital tools, online forms and detailed health information. This can disadvantage people in remote areas with limited internet access or less available health data, widening existing coverage gaps.
As AI algorithms become more prevalent in insurance plan building, so too will regulations surrounding AI in insurance, at both the state and federal levels. Based on laws already in effect in certain states, we’ve compiled steps insurers can take to mitigate the potential discriminatory impacts of AI and stay ahead of any AI-related regulation coming their way.
- Transparency and consent
Insurers should clearly disclose the use of AI in plan design and obtain consent from individuals or groups before applying AI algorithms. This allows insurers to build a trusting relationship with their customers around the use of AI, forming a strong foundation for applying AI’s efficiency in other insurance contexts.
- Explainability and documentation
Insurers should ensure human representatives can explain, in plain language, what data goes into the AI algorithms, how it is used and what guardrails are in place. Understanding the basic logic behind an algorithm and the decisions it makes helps build trust between customer and insurer.
Detailed documentation of AI model development should be maintained, including data sources, validation processes and bias mitigation strategies. Keeping an inventory of the AI systems used in plan building and underwriting will be crucial for building a library of reference materials to draw on when queries arrive from customers or governmental bodies.
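As a concrete illustration, such an inventory entry might look like the sketch below; every field name is illustrative rather than drawn from any specific regulation.

```python
# Sketch of a model-inventory record for AI systems used in plan building
# and underwriting. Field names are illustrative; adapt to your governance
# framework and local regulatory requirements.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str                    # e.g. "small-group-pricing-v3"
    owner: str                   # accountable team or individual
    purpose: str                 # which decisions the model informs
    data_sources: list[str]      # provenance of training data
    validation_date: date        # last independent validation
    bias_mitigations: list[str]  # proxy screens, reweighting, etc.
    audit_notes: list[str] = field(default_factory=list)

inventory = [
    ModelRecord(
        name="plan-recommendation-v2",
        owner="Actuarial AI Committee",
        purpose="Suggest plan tiers to small-group applicants",
        data_sources=["internal claims 2019-2024", "public census ZIP data"],
        validation_date=date(2025, 1, 15),
        bias_mitigations=["income-band reweighting", "proxy AUC screen"],
    ),
]
```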
- Monitoring and remediation
AI models should be assessed at regular intervals, either by committees formed within the carrier or by third parties. Algorithms should be explicitly prohibited from using publicly available data that correlates with protected characteristics unless an actuarial justification can be demonstrated. For example, an insurer using neighbourhood purchasing data must show it does not indirectly penalise racial minorities through proxy variables like grocery store preferences.
Insurers should assess whether AI-driven pricing or underwriting practices result in discrimination against protected classes (race, gender, disability, etc.), for instance by analysing anonymised applicant data to estimate the racial and ethnic demographics affected by AI decisions. Approval rates and premium differences should be compared across groups, with mandatory remediation if disparities for protected classes exceed a defined threshold.
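A monitoring job along those lines could look like the sketch below, assuming pandas and an anonymised decision extract; the thresholds, group labels and column names are illustrative assumptions, not regulatory figures.

```python
# Disparity-monitoring sketch: compare approval rates and average premiums
# across estimated demographic groups against a reference group.
import pandas as pd

decisions = pd.read_csv("anonymised_decisions.csv")  # hypothetical extract

APPROVAL_GAP_LIMIT = 0.05   # illustrative gap limit vs. reference group
PREMIUM_RATIO_LIMIT = 1.10  # illustrative average-premium ratio limit

stats = decisions.groupby("estimated_group").agg(
    approval_rate=("approved", "mean"),
    avg_premium=("monthly_premium", "mean"),
)
reference = stats.loc["reference_group"]  # e.g. the largest group

for group, row in stats.iterrows():
    approval_gap = reference["approval_rate"] - row["approval_rate"]
    premium_ratio = row["avg_premium"] / reference["avg_premium"]
    if approval_gap > APPROVAL_GAP_LIMIT or premium_ratio > PREMIUM_RATIO_LIMIT:
        print(f"FLAG {group}: approval gap {approval_gap:+.2%}, "
              f"premium ratio {premium_ratio:.2f}; remediation required")
```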
- Corrective action
AI models should be frozen immediately if the assessments described under monitoring and remediation find they cause discriminatory disparities. Insurers should then retroactively adjust premiums and coverage for affected policyholders, correcting discrepancies going back 24 months.
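The premium side of that correction could be computed as in the sketch below, which assumes a re-scored “corrected” premium from an approved fallback model and a 24-month look-back; all file and column names are illustrative.

```python
# Remediation sketch: refund the gap between charged and corrected premiums
# over a 24-month look-back. All names are illustrative assumptions.
from datetime import datetime, timedelta

import pandas as pd

cutoff = datetime.now() - timedelta(days=730)  # roughly 24 months

billing = pd.read_csv("billing_history.csv", parse_dates=["billed_on"])
window = billing[billing["billed_on"] >= cutoff].copy()

# Only refund overcharges; never claw back undercharges retroactively.
window["refund"] = (
    window["charged_premium"] - window["corrected_premium"]
).clip(lower=0)

refunds = window.groupby("policy_id")["refund"].sum()
print(refunds[refunds > 0].sort_values(ascending=False).head())
```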
As AI models are implemented across the insurance landscape, it is becoming increasingly clear that a hybrid human-AI approach is essential. Human oversight helps interpret nuanced cases, address ethical concerns, and intervene when AI systems produce biased or opaque outcomes. By implementing such measures, insurers can build plans with the efficiency and detail AI has to offer while simultaneously minimising harm, ensuring fairness and regulatory compliance.
At The49, we believe powerful AI should also be responsible AI. We help insurers build smarter, fairer systems that serve both the business and the people who rely on them. Reach out to find out more.