Regulating Proxy Discrimination in Insurtech

Hold on tight: AI will create fundamental challenges for insurance law, as well as the norms that guide it.

Anya Prince is an associate professor at the University of Iowa, and Daniel Schwarcz is the Fredrikson & Byron Professor of Law at the University of Minnesota.

Like many other domains of finance, insurance is being fundamentally transformed by Artificial Intelligence (“AI”).  AI holds the potential to alter virtually every aspect of the insurance business, from underwriting to claims payments to policy design.  At the same time, the insurance AI revolution poses countless challenges for a state-based system of regulation that evolved in an era of comparatively straightforward insurance technologies.

Perhaps the most significant of these challenges involves when and how to regulate insurance discrimination in the age of AI.  Reflecting the salience of these issues, prominent newspapers like the Wall Street Journal and New York Times have warned that insurers are starting to use information from social media posts to price coverage.  In response, at least one state insurance department – New York’s Department of Financial Services – recently released guidance about when and how life insurers can use “big data” to discriminate among applicants.  Meanwhile, the National Association of Insurance Commissioners – the most powerful entity in state insurance regulation – recently established a Big Data and Insurance working group to study and develop best practices for states’ review of rating techniques that rely on AI. 

Despite these efforts, state regulators have, to date, failed to clearly define, much less confront, the most important problem that AI poses for traditional insurance regulation: the risk of algorithmic proxy discrimination.  Proxy discrimination by AI occurs when two conditions are met.  The first condition is straightforward: the AI must discriminate on the basis of a facially neutral characteristic that disproportionately harms members of a legally protected class.  By contrast, the second condition is typically ignored or misunderstood: the capacity of the facially neutral practice to help predict the AI’s programmed objective must derive from the fact that it proxies for membership in the legally protected class.  For this to happen, the protected trait must itself be predictive of the outcome of interest, making its use “rational.”  In the absence of this second condition, discrimination by AI simply amounts to classical “disparate impact” (which states have long treated as non-actionable in insurance) rather than the more specific phenomenon of “proxy discrimination.”

This definition of proxy discrimination is well illustrated by considering a life insurance AI that produces higher rates for applicants who have visited the website of an organization that provides genetic testing for mutations associated with certain cancers.  Such discrimination would meet both elements of the definition.  First, the AI would be discriminating on the basis of a facially neutral characteristic (website visits) that disproportionately harms members of a protected class (those with an identifiable genetic mutation).  Second, applicants’ website visits would help the AI predict future claims because of their capacity to proxy for deleterious genetic information.  Framing this second point in econometric terms, data on applicants’ website visits would cease to be predictive of future insurance claims in a model that controlled for applicants’ genetic predispositions to cancer.
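To make the econometric point concrete, here is a minimal simulation in Python (all variable names and parameters are hypothetical illustrations, not data from the article).  Claims are generated to depend only on a genetic predisposition, while website visits merely correlate with that predisposition.  A regression on visits alone finds them strongly “predictive,” but once the genetic variable is added as a control, the visit coefficient collapses toward zero, exactly the pattern described above.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: claims depend only on genotype.
genetic_risk = rng.binomial(1, 0.05, n).astype(float)

# Visits to the testing site correlate with genotype but have no
# independent effect on claims.
visited_site = rng.binomial(1, np.where(genetic_risk == 1, 0.60, 0.05)).astype(float)
claims = 1.0 + 2.0 * genetic_risk + rng.normal(0.0, 1.0, n)

# Model 1: visits alone appear strongly predictive of claims.
m1 = sm.OLS(claims, sm.add_constant(visited_site)).fit()

# Model 2: controlling for genotype, the visit coefficient
# collapses toward zero -- visits were a pure proxy.
m2 = sm.OLS(claims, sm.add_constant(np.column_stack([visited_site, genetic_risk]))).fit()

print(m1.params)  # visit coefficient: large and positive
print(m2.params)  # visit coefficient: approximately zero

The same logic extends to any facially neutral variable whose predictive power flows entirely through a protected trait.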

Proxy discrimination by insurance AIs represents a fundamental regulatory challenge because it undermines the central goals of insurance anti-discrimination laws.  State and federal laws frequently bar insurers from discriminating on the basis of particular protected traits even when those traits are predictive of future claims.  Examples include laws prohibiting insurers from discriminating against individuals with preexisting health conditions, deleterious genetic information, or a history of being victimized by domestic violence.  All of these laws are intended to ensure that affordable insurance remains available to individuals in these protected categories.  Proxy discrimination undermines this objective by reproducing the outcomes that would obtain if insurers intentionally discriminated against protected, albeit “risky,” groups of policyholders.

As AIs become even more sophisticated, proxy discrimination will represent an increasingly fundamental challenge to insurance law, as well as to all other anti-discrimination regimes that seek to prohibit “rational discrimination.”  This is because AIs armed with big data are inherently structured to engage in proxy discrimination whenever they are deprived of information about membership in a legally suspect class that is genuinely predictive of a legitimate goal, like minimizing insurance claims.  Denying AIs access to direct information on membership in legally suspect classes, or to the most intuitive proxies for that information, does little to thwart this process; it simply causes AIs to locate less intuitive proxies.
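A brief sketch, again with wholly synthetic data, shows this dynamic.  A linear model is trained to predict claims from thirty weak behavioral correlates of a protected trait; it never sees the trait itself or any single obvious proxy, yet its output tracks the trait closely.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 50_000
protected = rng.binomial(1, 0.5, n).astype(float)

# Thirty weak correlates of the protected trait (think scattered
# browsing, purchase, and location signals), none decisive alone.
weak_proxies = 0.5 * protected[:, None] + rng.normal(0.0, 1.0, (n, 30))

# Claims are driven by the protected trait, which the model never sees.
claims = 2.0 * protected + rng.normal(0.0, 1.0, n)

model = LinearRegression().fit(weak_proxies, claims)
predicted = model.predict(weak_proxies)

# The model reconstructs the trait from its scattered correlates;
# under these parameters the correlation is roughly 0.8.
print(np.corrcoef(predicted, protected)[0, 1])

Striking any one correlate from the feature set merely shifts weight onto the remaining twenty-nine; purging the signal entirely would require removing every correlate of the trait, which in a big-data setting means removing nearly everything.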

For these reasons, insurance anti-discrimination law must adapt to combat proxy discrimination in the age of AI and big data.  States could pursue a variety of strategies to achieve this objective.  For instance, they could require insurers to collect data about the impact of discrimination by AI on coverage and rates for protected classes of policyholders.  Alternatively, they could require insurers to develop AIs that isolate only the predictive power of non-suspect variables, an approach sketched below.  Yet another option would be to flip the default rule of anti-discrimination law, so that all forms of discrimination are prohibited except those specifically allowed based on evidence regarding the risk of proxy discrimination, a solution already employed by the Affordable Care Act and some state auto insurance regimes.  But whatever the solution, states that ignore the problem of proxy discrimination will increasingly find that protected, but high-risk, policyholders suffer as AIs become more ubiquitous and powerful, producing the very harms that insurance anti-discrimination laws were originally designed to avoid.
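The second of these strategies can be made concrete with a statistical recipe familiar from the profiling literature: estimate the model with the protected trait included, so that other variables’ coefficients capture only their non-proxy predictive power, and then neutralize the trait when setting prices.  The sketch below uses invented variables and is an illustration of the general approach, not the authors’ specific proposal.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 100_000

# Synthetic setup: a rating variable that partly proxies for a
# protected trait and partly carries its own legitimate signal.
protected = rng.binomial(1, 0.1, n).astype(float)
rating_var = 0.8 * protected + rng.normal(0.0, 1.0, n)
claims = 1.0 + 2.0 * protected + 0.5 * rating_var + rng.normal(0.0, 1.0, n)

# Step 1: estimate WITH the protected trait, so the rating variable's
# coefficient reflects only its non-proxy predictive power (about 0.5).
X = sm.add_constant(np.column_stack([rating_var, protected]))
fit = sm.OLS(claims, X).fit()

# Step 2: price with the protected trait frozen at its population mean,
# so no applicant pays more merely because rating_var proxies the trait.
X_price = X.copy()
X_price[:, 2] = protected.mean()
rates = fit.predict(X_price)

print(fit.params)  # rating_var coefficient is close to 0.5
print(rates[:5])   # premiums vary only with rating_var's own signal

Had the protected trait been omitted at the estimation stage instead, the rating variable’s coefficient would have inflated to absorb the trait’s predictive power, which is precisely the proxy discrimination described above.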

Want to learn more?  See Prince and Schwarcz’s article, Proxy Discrimination in the Age of Artificial Intelligence and Big Data, forthcoming in the Iowa Law Review.

Artwork Credit:  GDJ for Openclipart
