Algorithmic and Non-Algorithmic Fairness: Should We Revise our View of the Latter Given Our View of the Former?
Permanent link
https://hdl.handle.net/10037/35775
Date
2024-10-24
Type
Journal article
Peer reviewed
Author
Lippert-Rasmussen, Kasper
Abstract
In the US context, critics of courts' use of algorithmic risk prediction tools have argued that COMPAS involves unfair machine bias because it generates higher false positive rates of predicted recidivism for black offenders than for white offenders. In response, some have argued that algorithmic fairness is, either also or only, a matter of calibration across groups (roughly, that a given score assigned by the algorithm corresponds to the same probability of the individual having the target property across different groups of individuals), and that, for mathematical reasons, it is virtually impossible to equalize false positive rates without impairing calibration. I argue that in standard non-algorithmic contexts, such as hiring, we do not think that lack of calibration entails unfair bias, and that it is difficult to see why algorithmic contexts should differ, fairness-wise, from non-algorithmic ones in this respect. Hence, we should reject the view that calibration is necessary for fairness in algorithmic contexts.
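For readers unfamiliar with the two criteria the abstract contrasts, the following sketch states them in standard probabilistic notation. The notation (S for the risk score, \hat{Y} for the binary prediction, Y for the target property such as recidivism, G for group membership) is introduced here purely for illustration and is not drawn from the paper itself.

% Calibration across groups: a given score carries the same probability of the
% target property in every group.
\forall s,\ \forall g, g': \quad P(Y = 1 \mid S = s,\, G = g) = P(Y = 1 \mid S = s,\, G = g')

% Equal false positive rates: among individuals who lack the target property,
% each group is flagged at the same rate.
\forall g, g': \quad P(\hat{Y} = 1 \mid Y = 0,\, G = g) = P(\hat{Y} = 1 \mid Y = 0,\, G = g')

When the base rates P(Y = 1 | G = g) differ across groups and prediction is imperfect, no algorithm can satisfy both conditions at once; this is the mathematical tension the abstract alludes to.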
Publisher
Springer Nature
Citation
Lippert-Rasmussen, K. (2024). Algorithmic and Non-Algorithmic Fairness: Should We Revise our View of the Latter Given Our View of the Former? Law and Philosophy.
Copyright 2024 The Author(s)