Authors:
Paper:
https://arxiv.org/abs/2408.06736
Introduction
In the modern world, algorithms play an increasingly significant role in our daily lives, from recommending entertainment to making critical decisions such as loan approvals and criminal sentencing. This growing reliance on algorithms brings with it numerous risks, ranging from minor irritations to severe injustices and catastrophes. Despite these risks, society continues to embrace “algorithm appreciation,” often trusting automated systems over human judgment. This paper explores the importance of incorporating risk and uncertainty into AI systems to address ethical concerns and ensure that algorithms make humane decisions.
The Numbers of the Future
Charles Babbage, the inventor of the difference engine, was once asked whether incorrect inputs would yield correct outputs. His confusion at the question highlights a fundamental issue with modern algorithms: they often require precise, contextless inputs, which can lead to errors. For instance, simple unit errors in medical calculations can have fatal consequences. By incorporating context and uncertainty into their calculations, algorithms can “sense check” results and block implausible outputs, thereby reducing harm and improving trustworthiness.
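As a toy illustration of such a “sense check” (not code from the paper; the drug, rate, and bounds are purely illustrative), a dose calculator can use context about what inputs and outputs are physically plausible to refuse obviously wrong calculations, such as a weight entered in grams instead of kilograms:

```python
def paracetamol_dose_mg(weight_kg: float) -> float:
    """Illustrative pediatric dose at 15 mg/kg, with plausibility checks."""
    # Context check: reject physically implausible inputs, e.g. a weight
    # typed in grams rather than kilograms, or a negative value.
    if not 0.5 <= weight_kg <= 300:
        raise ValueError(f"Implausible body weight: {weight_kg} kg")
    dose = 15.0 * weight_kg
    # Output check: block results outside an absolute safe range
    # (the 1000 mg cap here is an illustrative assumption).
    if dose > 1000.0:
        raise ValueError(f"Computed dose {dose} mg exceeds safe maximum")
    return dose
```

A contextless calculator would happily return 300,000 mg for an input of 20,000 “kg”; the checked version raises an error instead, prompting a human to re-examine the input.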
Uncertainty
Uncertainty in AI can be categorized into two types: aleatory uncertainty, which arises from natural variability, and epistemic uncertainty, which stems from a lack of knowledge. While probabilistic methods are often used to address uncertainty, they may not fully capture the lack of information. For example, risk-averse algorithms may unfairly reject loan applications from minority groups due to a lack of data, a phenomenon known as uncertainty bias. Allowing users to express uncertainty in their inputs can help mitigate such biases and ensure fairer decisions.
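One simple way to let users express uncertainty in their inputs is interval arithmetic: instead of forcing a precise number, the system accepts a range and propagates it through the calculation. A minimal sketch (the loan-income scenario and class design are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """An interval [lo, hi] representing an uncertain quantity."""
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Sum of intervals: add lower bounds and upper bounds separately.
        return Interval(self.lo + other.lo, self.hi + other.hi)

# A loan applicant who knows their salary exactly but is unsure of
# their bonus can state a range rather than inventing a precise figure.
salary = Interval(42_000, 42_000)   # known exactly
bonus = Interval(0, 5_000)          # genuinely uncertain
income = salary + bonus             # Interval(lo=42000, hi=47000)
```

A downstream decision rule can then distinguish “definitely above the threshold”, “definitely below”, and “uncertain, refer for review”, rather than silently penalising applicants whose data are incomplete.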
Provenance
Provenance refers to the origin and ownership history of an algorithm or dataset. Knowing the provenance of a medical decision support tool, for example, can help assess its trustworthiness. Provenance can also help detect misinformation, such as spurious claims about COVID-19 on social media. Generative AI systems, which can produce deep-fake images and hallucinate incorrect information, further underscore the importance of provenance. By expressing uncertainty about provenance, algorithms can highlight their trustworthiness and avoid generating harmful misinformation.
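Provenance can be made machine-checkable by attaching a small metadata record to every dataset and derived artefact, linking each one to its upstream sources. A minimal sketch of such a record (the field names and hashing scheme are assumptions for illustration, not a standard from the paper):

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceRecord:
    """Minimal provenance metadata for a dataset or model artefact."""
    source: str           # who produced it
    derived_from: tuple   # fingerprints of upstream records
    description: str

    def fingerprint(self) -> str:
        # Deterministic content hash so downstream records can cite this one.
        payload = json.dumps(
            {"source": self.source,
             "derived_from": list(self.derived_from),
             "description": self.description},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# A model's record cites the fingerprint of the data it was trained on,
# so its origin can be traced and its trustworthiness assessed.
raw = ProvenanceRecord("hospital-A", (), "anonymised admissions data")
model = ProvenanceRecord("lab-B", (raw.fingerprint(),), "triage model v1")
```

An output with no traceable chain of such records, or a chain rooted in an untrusted source, is exactly the kind of result an algorithm could flag as uncertain rather than present as fact.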
Uncertainty Ex Machina
Is the Algorithm Correct?
Assessing the correctness of an algorithm involves both empirical and moral considerations. Empirical accuracy can be measured using statistics such as accuracy, precision, and recall, but these metrics may not fully capture the nuances of decision-making, especially in high-risk situations. Moral correctness is harder to define but is crucial for ensuring fairness and preventing harm. Explainability is another important aspect, as it helps users understand why an algorithm made a particular decision. Allowing algorithms to express epistemic uncertainty can provide additional avenues for verifying their correctness and appealing their decisions.
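To make the empirical side concrete, the standard statistics can be computed from a binary confusion matrix; a short sketch (illustrative, not from the paper) also shows why a single number can mislead in high-risk settings:

```python
def confusion_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision and recall from a binary confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    # Precision: of the cases flagged positive, how many really were.
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall: of the truly positive cases, how many were caught.
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# 8 true positives, 2 false positives, 2 missed cases, 88 true negatives:
acc, prec, rec = confusion_metrics(8, 2, 2, 88)
```

Here accuracy is 0.96, which sounds excellent, yet recall is only 0.8: one in five genuine cases is missed. Whether that trade-off is acceptable is precisely the moral question the metric alone cannot answer.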
Cede to the Algorithm?
Defining the human-algorithm relationship is crucial, especially in high-risk scenarios. There are numerous examples of algorithms contributing to disasters or injustices, either by being bypassed or trusted blindly. For instance, the Smiler rollercoaster accident occurred because operators bypassed a safety algorithm, while the Air France Flight 447 crash happened after the autopilot disengaged due to uncertain speed information. Designing algorithms that can say “I don’t know” when faced with uncertainty can help prevent such incidents by providing a framework for human intervention.
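The “I don’t know” behaviour can be sketched as a decision rule with abstention: the algorithm only commits to a decision when it is sufficiently confident, and otherwise escalates to a human. This is a toy illustration under assumed thresholds, not the paper's implementation:

```python
def decide_with_abstention(prob_approve: float, threshold: float = 0.8) -> str:
    """Commit to a decision only when confident; otherwise escalate.

    `threshold` (an illustrative assumption) sets how sure the system
    must be before acting without a human in the loop.
    """
    if prob_approve >= threshold:
        return "approve"
    if prob_approve <= 1 - threshold:
        return "reject"
    # Epistemic humility: rather than guessing, hand control to a human.
    return "refer to human"
```

Confident cases are handled automatically, while ambiguous ones, like an autopilot facing unreliable sensor readings, trigger an explicit, orderly handover rather than a silent guess or an abrupt disengagement.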
Conclusions
To develop humane algorithms, it is essential to design systems that work harmoniously with humans, avoiding tyrannical or harmful behavior. Algorithms should produce equitable and fair outcomes by recognizing user diversity, accepting varied human inputs, and being aware of the context and provenance of the data they use. Transparency and explainability are crucial for verifying outputs and ensuring correctness. Trustworthiness in control requires algorithms to accept and relinquish control in ways that humans find workable, incorporating fail-safe measures to prevent disasters. Properly handling uncertainty in high-risk scenarios is key to developing humane algorithms.
Acknowledgements
This paper has benefitted from discussions with many people, including Daniel Joyce, Dominic Calleja, Enrique Miralles-Dolz, Adolphus Lye, and Alexander Wimbush. Special thanks to Scott Ferson for proofreading a draft of the manuscript.