dc.contributor.advisor | Mahler, Tobias | |
dc.contributor.author | Hauglid, Mathias Karlsen | |
dc.date.accessioned | 2024-04-22T09:40:23Z | |
dc.date.available | 2024-04-22T09:40:23Z | |
dc.date.embargoEndDate | 2028-04-26 | |
dc.date.issued | 2024-04-26 | |
dc.description.abstract | One of the most promising utilisations of Artificial Intelligence in healthcare is Clinical Decision Support (AI-CDS) systems: AI systems that support clinical assessments and decision-making by producing relevant classifications and predictions that clinicians and patients may rely on. However, an oft-cited concern is that AI systems can be ‘biased’, a nebulous term often used to describe certain undesirable side-effects of AI. There is a widespread apprehension that the use of AI to aid decision-making might, owing to the presence of ‘bias’, produce results that amount to discrimination.
In response to various risks associated with AI systems, the EU legislature has proposed a common European regulatory framework for AI systems: the Artificial Intelligence Act (AI Act). While facilitating innovation and trade, the AI Act aims to ensure the effective protection of the safety and fundamental rights of EU citizens, including the right to non-discrimination. In particular, the proposed AI Act will require that certain preventive measures be taken to ensure compliance with applicable requirements before AI systems can lawfully be deployed in the EU. These preventive measures may, in one form or another, require that discrimination in AI systems be assessed before deployment.
The assessment of discrimination in an AI-CDS system before its deployment calls for the development of appropriate assessment methodologies. The objective of the thesis is to develop certain methodological elements for assessing discrimination in these systems in a pre-deployment setting. More precisely, the thesis sets out to develop considerations, principles, criteria, and methods that ought to be included in a pre-deployment discrimination assessment based on the non-discrimination principle in EU law. As a foundation for the development of these methodological elements, the thesis explores the issue of ‘bias’ in AI-CDS systems and the mechanisms through which equality-related biases may occur in these systems. | en_US |
dc.description.doctoraltype | ph.d. | en_US |
dc.description.popularabstract | There are widespread expectations that Artificial Intelligence may revolutionise healthcare. The use of AI as clinical decision support may contribute to faster, more efficient, more accessible, and more accurate medical care. However, AI technologies are often biased to the detriment of certain patient groups. Could biases in AI systems cause discrimination in healthcare? And how may one assess discrimination in an AI system before it is deployed? | en_US |
dc.identifier.isbn | 978-82-93021-46-9 | en_US |
dc.identifier.uri | https://hdl.handle.net/10037/33423 | |
dc.language.iso | eng | en_US |
dc.publisher | UiT The Arctic University of Norway | en_US |
dc.publisher | UiT Norges arktiske universitet | en_US |
dc.rights.holder | Copyright 2024 The Author(s) | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/4.0 | en_US |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) | en_US |
dc.subject | artificial intelligence | en_US |
dc.subject | bias | en_US |
dc.subject | discrimination | en_US |
dc.subject | health technology | en_US |
dc.subject | EU law | en_US |
dc.title | Bias and Discrimination in Clinical Decision Support Systems Based on Artificial Intelligence | en_US |
dc.type | Doctoral thesis | en_US |
dc.type | Doktorgradsavhandling | en_US |