Bias and Discrimination in Clinical Decision Support Systems Based on Artificial Intelligence
Permanent link
https://hdl.handle.net/10037/33423
File(s)
Thesis (PDF)
Appendix 1 (PDF)
Appendix 2 (PDF)
Appendix 3 (PDF)
Statement on the use of ChatGPT (PDF)
File(s) with restricted access are under embargo until 2028-04-26
Date
2024-04-26
Type
Doctoral thesis
Author
Hauglid, Mathias Karlsen
Abstract
One of the most promising applications of Artificial Intelligence in healthcare is AI-based Clinical Decision Support (AI-CDS): AI systems that support clinical assessments and decision-making by producing relevant classifications and predictions that clinicians and patients may rely on. However, an oft-cited concern is that AI systems can be ‘biased’ – a nebulous term often used to describe certain undesirable side-effects of AI. There is widespread apprehension that the use of AI to aid decision-making might, owing to the presence of ‘bias’, produce results that amount to discrimination.
In response to various risks associated with AI systems, the EU legislature has proposed a common European regulatory framework for AI systems – the Artificial Intelligence Act (AI Act). While facilitating innovation and trade, the AI Act aims to ensure the effective protection of the safety and fundamental rights of EU citizens, including the right to non-discrimination. In particular, the proposed AI Act will require that certain preventive measures be taken to ensure compliance with applicable requirements before AI systems can lawfully be deployed in the EU. These preventive measures may, in one form or another, require that discrimination in AI systems be assessed before deployment.
Assessing discrimination in an AI-CDS system before its deployment calls for the development of appropriate assessment methodologies. The objective of the thesis is to develop certain methodological elements for assessing discrimination in these systems in a pre-deployment setting. More precisely, the thesis sets out to develop considerations, principles, criteria, and methods that ought to be included in a pre-deployment discrimination assessment based on the non-discrimination principle in EU law. As a foundation for the development of these methodological elements, the thesis explores the issue of ‘bias’ in AI-CDS systems and the mechanisms through which equality-related biases may occur in these systems.
Publisher
UiT The Arctic University of Norway
Copyright 2024 The Author(s)