Reverberations of Shadows Speak to Us: In Search of a World Outside of Data
December 2025
AI has a structural limitation: it learns only from the data it is given and cannot perceive the reality that exists outside of it. This is especially dangerous when the data carries historical bias, because the AI learns that bias as fact and reproduces past inequities directly in future judgments. Humans can recognize context and discrimination and correct for them; AI cannot grasp background circumstances or ethical standards, and cannot account for values and voices that were never recorded in the data. This inability to recognize the void in its own training data is at the heart of the bias problem.
Experiences never captured as data, voices that were excluded, and the realities of historically overlooked people are absent from the model, so AI keeps making judgments from an incomplete picture of the world. As a result, discrimination and inequity become entrenched, and the risk that socially vulnerable people are disadvantaged yet again grows. Internationally, the problem is being taken seriously. In the U.S., a case in which a judicial risk-assessment AI unfairly rated Black defendants as high-risk has drawn attention, and the EU AI Act now mandates bias verification for high-risk AI systems. Researchers are pursuing counterfactual explanations to compensate for missing reality and methods to quantify structural discrimination, but human ethical intervention remains essential for any fundamental solution.
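To make "quantifying structural discrimination" concrete, here is a minimal sketch of one common group-fairness measure, the disparate-impact ratio (the selection rate of one group divided by that of another). The loan-approval numbers are hypothetical illustrations, and this single metric is only a starting point, not the auditing process the EU AI Act actually prescribes.

```python
def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of group A's selection rate to group B's.

    A value far below 1.0 suggests group A is disfavored; the informal
    "80% rule" used in U.S. employment law flags ratios under 0.8.
    """
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical loan decisions (1 = approved, 0 = denied)
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
```

Note what such a metric cannot do: it only measures disparity among people who appear in the data at all. The people the essay is concerned with, those never recorded, are invisible to it, which is exactly why the author argues that metrics alone cannot substitute for human ethical judgment.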
How can we scoop up the shadowed realities that AI cannot reach? That question is the key to preserving fairness and trust in the age of AI.