When to Question the Measure of Life: Where Objective Functions Quietly Draw the Line
December 2025
When we entrust decisions to an AI system, the objective function is central: it is the criterion that determines what to maximize and what to minimize. Yet choosing it involves deep value judgments rather than neutral mathematics. For example, an objective function that maximizes the number of life-years saved in a medical algorithm appears rational and well suited to the efficient use of resources. But this measure is structured to give priority to the young, leaving the elderly and the chronically ill behind. Moreover, in a society where health status is shaped by living conditions, race, and income, the goal of maximizing healthy years can itself work to widen inequity.
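The mechanism can be made concrete with a minimal sketch. All names and numbers below are hypothetical, and a uniform life expectancy is a simplifying assumption; the point is only that an allocator scoring patients by expected life-years saved will rank a younger patient above an older one even when the treatment's survival benefit is identical for both.

```python
LIFE_EXPECTANCY = 85  # simplifying assumption: uniform life expectancy

# Hypothetical patients: identical probability that treatment succeeds.
patients = [
    {"name": "A", "age": 30, "survival_gain": 0.5},
    {"name": "B", "age": 70, "survival_gain": 0.5},
]

def expected_life_years(p):
    # The objective: expected remaining life-years saved by treating p.
    return p["survival_gain"] * max(LIFE_EXPECTANCY - p["age"], 0)

ranked = sorted(patients, key=expected_life_years, reverse=True)
print([(p["name"], expected_life_years(p)) for p in ranked])
# → [('A', 27.5), ('B', 7.5)]
```

With the same clinical benefit, the objective alone puts the 30-year-old far ahead of the 70-year-old: the age-based priority is built into the formula, not into the data.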
A case study of a U.S. health insurance algorithm that used medical expenditures as a proxy for health need found that Black patients were scored as having lower medical needs than they actually had. The reason was that spending reflected differences in access to health care rather than differences in health status. This is a classic example of inequity being reinforced through AI: the objective function was paired with an inappropriate proxy indicator.
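The proxy failure can be sketched in a few lines. This is an illustrative simulation with invented numbers, not the study's data: two groups have identical true need, but one has lower access to care, so its observed spending, and therefore its algorithmic score, is lower.

```python
# Hypothetical patients: equal true need, unequal access to care.
patients = [
    {"group": "high-access", "true_need": 10, "access": 1.0},
    {"group": "low-access",  "true_need": 10, "access": 0.6},
]

# Observed spending is need filtered through access: the proxy
# measures utilization, not sickness.
for p in patients:
    p["spending"] = p["true_need"] * p["access"]

# Ranking by the proxy separates groups whose need is identical.
by_proxy = sorted(patients, key=lambda p: p["spending"], reverse=True)
print([(p["group"], p["spending"]) for p in by_proxy])
# → [('high-access', 10.0), ('low-access', 6.0)]
```

The objective function here is not "wrong" as mathematics; it faithfully minimizes error against spending. The harm enters at the moment spending is chosen to stand in for need.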
Against the backdrop of these problems, the EU AI Act and the OECD AI Principles treat adequacy of purpose as an important requirement. The choice of what to maximize is an ethical judgment about which values society prioritizes, and the fairness and safety of an AI system are already being shaped at the stage where its objective is set. The objective function is not merely a mathematical formula but a mirror of society's values and priorities for the future, and its setting deserves careful scrutiny.