Overview of bibliographic record no. 1276653
Empowering trust by embedding fairness in medical AI algorithm // Ethical Challenges of Artificial Intelligence and New Technologies
Ljubljana, Slovenia, 2022, pp. - (lecture, not peer-reviewed, abstract, other)
CROSBI ID: 1276653
Title
Empowering trust by embedding fairness in medical AI algorithm
Authors
Poslon, Luka
Type, subtype and category of work
Conference abstracts, abstract, other
Conference
Ethical Challenges of Artificial Intelligence and New Technologies
Venue and date
Ljubljana, Slovenia, 25 October 2022
Type of participation
Lecture
Type of review
Not peer-reviewed
Keywords
no keywords
Abstract
In this contribution, we will discuss the ethical implications of unfair medical AI algorithms and attempt to explain how fairness and equity should lead to increased trust in medical AI. Although the concept of an unfair algorithm may seem like a contradiction, many documented cases of discrimination based on race and ethnicity in medical AI are known. Algorithmic fairness is an area at the intersection of machine learning and ethics whose main goals are to embed fairness and equity into algorithms and to evaluate existing biases in data. Algorithmic fairness is one of the many challenges of our research topic, which investigates the application of algorithms during the patient-monitoring process. The algorithm in question aims to predict, early enough for the medical team to respond quickly, which patients are at high risk of further deterioration and need to be transferred to the ICU within six hours. Such an algorithm might inadvertently harm a specific group, for example, Asians: if the training database does not include a sufficient number of Asian patients, the algorithm could be inaccurate for them and show lower sensitivity for that group. In addition, a lower positive predictive value for Asian patients would mean a higher proportion of false alarms, which could also harm them. Eventually, physicians may begin to ignore alerts recommending that patients be transferred to the ICU, which could lead to fatal health outcomes. Although the existence of algorithmic biases is undeniable, we need to work toward fair medical AI so that physicians and patients can regain trust in it. Such algorithmic solutions should not only make recommendations and predictions grounded in fairness and ethical values but also include procedures to reduce bias in the data.
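The subgroup disparities described above (lower sensitivity and lower positive predictive value for an underrepresented group) can be audited directly from a model's predictions. The sketch below is a minimal illustration, not the study's implementation: all labels, predictions, and group names are synthetic and chosen only to show how the per-group metrics are computed.

```python
def confusion_counts(y_true, y_pred):
    """Return (TP, FP, FN, TN) for binary labels (1 = needs ICU transfer)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def group_metrics(y_true, y_pred, groups):
    """Per-group sensitivity (TP / (TP + FN)) and PPV (TP / (TP + FP))."""
    metrics = {}
    for g in set(groups):
        yt = [t for t, gr in zip(y_true, groups) if gr == g]
        yp = [p for p, gr in zip(y_pred, groups) if gr == g]
        tp, fp, fn, _ = confusion_counts(yt, yp)
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        metrics[g] = {"sensitivity": sens, "ppv": ppv}
    return metrics

# Synthetic example: group "B" is underrepresented (4 of 10 patients)
# and the hypothetical model performs worse on it.
y_true = [1, 1, 0, 0, 1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"]
for g, m in sorted(group_metrics(y_true, y_pred, groups).items()):
    print(g, m)
```

A gap between groups in these two numbers is precisely the harm sketched in the abstract: lower sensitivity means missed ICU transfers, while lower PPV means more false alarms and, eventually, alert fatigue.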
Original language
English