CPH Tech Policy Brief #6

Algorithmic fairness: Learnings from a case that used AI for decision support

This CPH Tech Policy Brief is based on a working paper by Vedran Sekara, Therese Moreau Hansen, and Roberta Sinatra. The brief provides a short introduction to algorithmic fairness and an example of auditing fairness in an algorithm aimed at identifying and assessing children at risk of abuse.

Algorithmic decision-making systems are increasingly adopted by governments and public service agencies to make life-changing decisions. However, scientists, activists, policy experts, and civil society have all voiced concerns that such systems are deployed without adequate consideration of potential harms, biases, and disparate impacts, and without adequate accountability.

This policy brief takes its point of departure in a single case from two Danish municipalities, in which two of the three authors helped uncover a potentially risky and harmful use of algorithmic decision support in the placement of children. Against this backdrop, the brief aims to explain and contextualize central issues around algorithmic fairness, bias, and auditing.

It is crucial that algorithmic systems work as intended, and that they work fairly. One vital check is whether an algorithm discriminates against any individuals, groups, or populations. Ensuring that algorithms produce ‘outputs’ of equitable quality, accuracy, and utility for different groups (e.g. men and women, old and young, disabled and non-disabled people), and for different intersections of these groups, is called algorithmic fairness.
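To make this concrete, the sketch below shows one simple way such a check can be carried out: computing accuracy and false-positive rates separately for each group and comparing them. This is a minimal illustration with toy data; the group labels, records, and the group_metrics helper are hypothetical and are not taken from the audited system described in this brief.

```python
# Minimal sketch of a group-wise fairness audit for a binary classifier.
# All data and names here are illustrative, not from the audited system.
from collections import defaultdict

def group_metrics(records):
    """Compute accuracy and false-positive rate per group.

    Each record is (group, y_true, y_pred) with binary labels,
    where 1 means 'flagged as at risk'.
    """
    counts = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "negatives": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        c["n"] += 1
        c["correct"] += int(y_true == y_pred)
        if y_true == 0:
            c["negatives"] += 1
            c["fp"] += int(y_pred == 1)
    return {
        g: {
            "accuracy": c["correct"] / c["n"],
            "false_positive_rate": c["fp"] / c["negatives"] if c["negatives"] else float("nan"),
        }
        for g, c in counts.items()
    }

# Toy example: group_b is wrongly flagged far more often than group_a,
# which is the kind of disparity a fairness audit is meant to surface.
records = [
    ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
for group, metrics in group_metrics(records).items():
    print(group, metrics)
```

In practice an audit would use more metrics (e.g. false-negative rates, calibration) and also examine intersections of groups, but the underlying idea is the same: compare how well the algorithm performs for each group it affects.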