Web of Science: 13 citations, Scopus: 15 citations, Google Scholar: citations,
Bias in algorithms of AI systems developed for COVID-19: a scoping review
Delgado, Janet (Universidad de Granada. Departamento de Filosofía)
de Manuel, Alicia (Universitat Autònoma de Barcelona. Departament de Filosofia)
Parra Jounou, Iris 1989- (Universitat Autònoma de Barcelona. Departament de Filosofia)
Moyano, Cristian (Universitat Autònoma de Barcelona. Departament de Filosofia)
Rueda, Jon (FiloLab Scientific Unit of Excellence of the University of Granada)
Guersenzvaig, Ariel (Universitat de Vic - Universitat Central de Catalunya. Elisava, Facultat de Disseny i Enginyeria)
Ausín, Txetxu (Consejo Superior de Investigaciones Científicas (Espanya). Instituto de Filosofía)
Cruz Piqueras, Maite (Escuela Andaluza de Salud Pública)
Casacuberta, David 1967- (Universitat Autònoma de Barcelona. Departament de Filosofia)
Puyol González, Àngel 1968- (Universitat Autònoma de Barcelona. Departament de Filosofia)

Date: 2022
Abstract: To analyze which ethically relevant biases have been identified in the academic literature on artificial intelligence (AI) algorithms developed either for patient risk prediction and triage, or for contact tracing, to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health (SDOH) has been considered in these AI developments or not. We conducted a scoping review of the literature, covering publications from March 2020 to April 2021. Studies mentioning biases in AI algorithms developed for contact tracing and medical triage or risk prediction regarding COVID-19 were included. From 1054 identified articles, 20 studies were finally included. We propose a typology of biases identified in the literature based on bias, limitations, and other ethical issues in both areas of analysis. Results on health disparities and SDOH were classified into five categories: racial disparities, biased data, socio-economic disparities, unequal accessibility and workforce, and information communication. SDOH need to be considered in the clinical context, where they still seem underestimated. Epidemiological conditions depend on geographic location, so the use of local data in studies to develop international solutions may increase some biases. Gender bias was not specifically addressed in the articles included. The main biases are related to data collection and management. Ethical problems related to privacy, consent, and lack of regulation have been identified in contact tracing, while some bias-related health inequalities have been highlighted. There is a need for further research focusing on SDOH and these specific AI apps.
Note: Other grants: UAB transformative agreements
Note: Other grants: Fundación BBVA
Rights: This document is subject to a Creative Commons use licence. Full or partial reproduction, distribution, public communication of the work, and the creation of derivative works are permitted, even for commercial purposes, provided that authorship of the original work is acknowledged. Creative Commons
Language: English
Document: Article; research; published version
Subject: Artificial intelligence; Bias; Digital contact tracing; COVID-19; Patient risk prediction
Published in: Journal of Bioethical Inquiry, Vol. 19 (July 2022), p. 407-419, ISSN 1872-4353

DOI: 10.1007/s11673-022-10200-z
PMID: 35857214


13 p, 724.0 KB

The record appears in the collections:
Articles > Research articles
Articles > Published articles

Record created 2022-09-23, last modified 2024-04-24


