Misplaced Trust? Rethinking the Reliability of Algorithms in Decision-Making
- Kafico
- May 16
- 1 min read
Public services are increasingly turning to algorithms for decision-making, trusting in their speed, consistency, and supposed neutrality. But as legal scholar Guido Noto La Diega argues, this trust can be dangerously misplaced (Noto La Diega, 2023).
Consider the UK case in which a software flaw in the Form E divorce calculator may have miscalculated financial settlements for around 20,000 couples. The error went unnoticed, cloaked in the appearance of mathematical precision: no red flag, no human instinct to double-check, just a silent, systemic failure.
Noto La Diega contends that while humans are imperfect, their decisions are socially embedded. We tend to emulate one another, which makes human judgment more predictable, accountable, and open to challenge. Algorithms, by contrast, can fail silently, and their logic is often opaque or effectively unchallengeable for the people affected.
Trust in automation must be earned, not assumed. Systems used in recruitment, law, or healthcare must be transparent, reviewable, and subject to human oversight. If fairness matters, then the goal shouldn’t be to replace human judgment but to support it, ensuring that decisions remain understandable, contestable, and morally coherent.
Trust better, not blindly.
Reference: Noto La Diega, G. (2023). Law and Technology: Untrustworthy Machines? In: Fuster, G.G., & Zuiderveen Borgesius, F. (eds.) Data Protection, Artificial Intelligence and Human Rights. Oxford: Hart Publishing.