Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review

Jinsun Jung, Hyungbok Lee, Hyunggu Jung, Hyeoneui Kim

Research output: Contribution to journal › Review article › peer-review

13 Scopus citations

Abstract

Background: Significant advancements in information technology have driven the development of trustworthy explainable artificial intelligence (XAI) in healthcare. Despite improved performance, XAI techniques have not yet been integrated into real-time patient care.

Objective: The aim of this systematic review is to understand the trends and gaps in XAI research by assessing the essential properties of XAI and evaluating explanation effectiveness in the healthcare field.

Methods: PubMed and Embase were searched for relevant peer-reviewed articles, published between January 1, 2011, and April 30, 2022, that developed an XAI model using clinical data and evaluated explanation effectiveness. All retrieved papers were screened independently by two authors. Relevant papers were also reviewed to identify the essential properties of XAI (e.g., stakeholders and objectives of XAI, quality of personalized explanations) and the measures of explanation effectiveness (e.g., mental model, user satisfaction, trust assessment, task performance, and correctability).

Results: Six of 882 articles met the eligibility criteria. Artificial intelligence (AI) users were the most frequently described stakeholders. XAI served various purposes, including evaluation of, justification of, improvement of, and learning from AI. The quality of personalized explanations was evaluated in terms of fidelity, explanatory power, interpretability, and plausibility. User satisfaction was the most frequently used measure of explanation effectiveness, followed by trust assessment, correctability, and task performance. The methods of assessing these measures also varied.

Conclusion: XAI research should address the lack of a comprehensive, agreed-upon framework for explaining XAI and of standardized approaches for evaluating the effectiveness of the explanations that XAI provides to diverse AI stakeholders.

Original language: English
Article number: e16110
Journal: Heliyon
Volume: 9
Issue number: 5
DOIs
State: Published - May 2023

Keywords

  • Essential properties of XAI
  • Evaluation of explanation effectiveness
  • Explainable artificial intelligence
  • Interpretable artificial intelligence
