Journal article

A Survey of the Application of Explainable Artificial Intelligence in Biomedical Informatics

Hassan Eshkiki, Farinaz Tanhaei, Fabio Caraffini, Benjamin Mora

Applied Sciences, Volume: 15, Issue: 24, Start page: 12934

Swansea University Authors: Hassan Eshkiki, Fabio Caraffini, Benjamin Mora

  • 71126.VoR.pdf

    PDF | Version of Record

    © 2025 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.


DOI (Published version): 10.3390/app152412934


Published in: Applied Sciences
ISSN: 2076-3417
Published: MDPI AG 2025

URI: https://cronfa.swan.ac.uk/Record/cronfa71126
Abstract: This review investigates the application of Explainable Artificial Intelligence (XAI) in biomedical informatics, encompassing domains such as medical imaging, genomics, and electronic health records. Through a systematic analysis of 43 peer-reviewed articles, we examine current trends, as well as the strengths and limitations of methodologies currently used in real-world healthcare settings. Our findings highlight a growing interest in XAI, particularly in medical imaging, yet reveal persistent challenges in clinical adoption, including issues of trust, interpretability, and integration into decision-making workflows. We identify critical gaps in existing approaches and underscore the need for more robust, human-centred, and intrinsically interpretable models, with only 44% of the papers studied proposing human-centred validations. Furthermore, we argue that fairness and accountability, which are key to the acceptance of AI in clinical practice, can be supported by the use of post hoc tools for identifying potential biases but ultimately require the implementation of complementary fairness-aware or causal approaches alongside evaluation frameworks that prioritise clinical relevance and user trust. This review provides a foundation for advancing XAI research toward the development of more transparent, equitable, and clinically meaningful AI systems for use in healthcare.
Keywords: SHAP; LIME; AI; Explainable Artificial Intelligence; XAI; medical imaging; model interpretability; human-centred AI; biomedical informatics; post hoc explanations
College: Faculty of Science and Engineering
Funders: This research received no external funding.
Issue: 24
Start Page: 12934
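
To give a concrete sense of the post hoc explanation tools the abstract and keywords refer to (SHAP in particular), the sketch below shows one common usage pattern: training a toy model and computing per-feature SHAP attributions, the kind of audit the review describes for surfacing potential biases. This is a minimal illustration, not code from the paper; the synthetic dataset, the feature names, and the choice of a random-forest risk model are all assumptions made for demonstration.

```python
# Minimal sketch (assumptions only, not from the paper): a post hoc
# SHAP audit of a toy tabular "risk score" model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["age", "biomarker_a", "biomarker_b", "noise"]  # hypothetical
X = rng.normal(size=(200, 4))
# Synthetic risk score that, by construction, depends only on the
# first two features.
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Mean absolute SHAP value per feature: a global, post hoc view of
# which inputs drive predictions. A large attribution on a sensitive
# attribute would flag a potential bias for closer, fairness-aware
# inspection, as the abstract argues.
for name, importance in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {importance:.3f}")
```

As the abstract notes, an attribution summary like this can surface a suspect dependency, but it does not by itself establish or remove unfairness; that is why the review pairs such post hoc tools with complementary fairness-aware or causal approaches.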