Monday, December 8, 2025

False Shadows Created by Proxies: When Proxy Indicators Distort the World (December 2025)

Proxy bias arises when the quantity that should be measured is replaced by an approximate indicator. An AI system cannot understand the social context behind such a proxy; it learns the numbers at face value and so risks magnifying past biases and inequalities. A typical example is COMPAS, the U.S. recidivism prediction model, which used arrest history as a proxy for criminal propensity, creating a structure in which Black defendants were prone to receive unfairly high risk ratings.
In the financial sector, the use of ZIP codes as a proxy for credit risk has been reported to translate regional disparities directly into individual evaluations. In healthcare, an algorithm that used healthcare expenditures as a proxy for health need contributed to racial disparities. In each case, the root cause was the same: the proxy indicator did not correctly represent the target variable.
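The mechanism described above can be sketched with a toy simulation. All numbers here are hypothetical assumptions, not real COMPAS or lending data: two groups have identical true risk, but the proxy (an arrest record) is inflated for one group by heavier historical enforcement, so a model scoring people by the proxy alone rates that group as riskier.

```python
import random

random.seed(0)

def make_person(group):
    """Simulate one person: (group, true_risk, arrested)."""
    true_risk = random.random() < 0.3            # same base rate for both groups
    enforcement = 0.9 if group == "B" else 0.4   # assumed policing disparity
    # An arrest happens when risk manifests AND enforcement catches it,
    # plus some spurious arrests driven purely by enforcement intensity.
    arrested = (true_risk and random.random() < enforcement) \
        or (random.random() < 0.1 * enforcement)
    return group, true_risk, int(arrested)

people = [make_person(g) for g in ("A", "B") for _ in range(10_000)]

# A naive "risk score" that just reads off the proxy reproduces the
# enforcement disparity, even though true risk is equal by construction.
rates = {}
for g in ("A", "B"):
    rows = [p for p in people if p[0] == g]
    true_rate = sum(r[1] for r in rows) / len(rows)
    proxy_rate = sum(r[2] for r in rows) / len(rows)
    rates[g] = (true_rate, proxy_rate)
    print(f"group {g}: true risk {true_rate:.3f}, proxy score {proxy_rate:.3f}")
```

Running this shows near-identical true risk across groups but a markedly higher proxy score for group B; any model trained on the proxy inherits that gap.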
The EU AI Act and the OECD AI Principles emphasize the relevance and causal validity of data, and treat the use of inappropriate proxy indicators as a serious risk. The essence of proxy bias is a wrong choice of indicator: defining the concept to be measured and validating the data against it is the foundation of fair AI design.
