A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging.

Document record

Relations

This document is related to:

info:eu-repo/semantics/altIdentifier/doi/10.1016/j.ejrad.2023.111159
info:eu-repo/semantics/altIdentifier/pmid/37976760
info:eu-repo/semantics/altIdentifier/eissn/1872-7727
info:eu-repo/semantics/altIdentifier/urn/urn:nbn:ch:serval-BIB_142C291FAF8F2

Licenses

info:eu-repo/semantics/openAccess, CC BY 4.0, https://creativecommons.org/licenses/by/4.0/



Related subjects (English)

Cerebrum, Mind

Cite this document

M. Champendal et al., "A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging.", Serveur académique Lausannois, ID: 10.1016/j.ejrad.2023.111159



Abstract

To review the eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts and full texts, resolving differences through discussion.

A total of 228 studies met the criteria. XAI publications are increasing, mainly targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies are the most frequently addressed, chiefly COVID-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15). Explanations are presented visually (n = 186), numerically (n = 67), as rules (n = 11), textually (n = 11), and by example (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). Most explanations were local (78.1 %), 5.7 % were global, and 16.2 % combined both local and global approaches. Post-hoc approaches were predominantly employed. Terminology varied, with explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) sometimes used interchangeably.

The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats predominate. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should take user needs and perspectives into account.
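To make these categories concrete, the sketch below shows what the review's most common combination — a post-hoc, local, visual explanation — can look like in code: a gradient saliency map highlighting which input pixels most influenced a single prediction. The model and input are hypothetical placeholders (a randomly initialised torchvision ResNet-18 and a random tensor), not taken from any of the reviewed studies.

    import torch
    import torchvision.models as models

    # Stand-in classifier; a real use case would load a trained MI model.
    model = models.resnet18(weights=None)
    model.eval()

    # Dummy "scan"; a real pipeline would load and normalise an actual image.
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    logits = model(image)
    score = logits[0, logits[0].argmax()]   # logit of the predicted class
    score.backward()                        # gradients w.r.t. the input pixels

    # Local, visual, post-hoc explanation: per-pixel influence on this one
    # prediction, typically overlaid on the image as a heatmap.
    saliency = image.grad.abs().max(dim=1).values   # shape (1, 224, 224)

Because the gradient is taken for one specific input, the resulting map is a local explanation; aggregating such attributions over a whole dataset would move toward a global one.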
