Frontiers in Artificial Intelligence (Feb 2024)

Benchmarking the influence of pre-training on explanation performance in MR image classification

  • Marta Oliveira,
  • Rick Wilming,
  • Benedict Clark,
  • Céline Budding,
  • Fabian Eitel,
  • Kerstin Ritter,
  • Stefan Haufe

DOI
https://doi.org/10.3389/frai.2024.1330919
Journal volume & issue
Vol. 7

Abstract


Convolutional Neural Networks (CNNs) are frequently and successfully used in medical prediction tasks. They are often combined with transfer learning, which improves performance when training data for the task are scarce. The resulting models are highly complex and typically provide no insight into their predictive mechanisms, motivating the field of “explainable” artificial intelligence (XAI). However, previous studies have rarely evaluated the “explanation performance” of XAI methods quantitatively against ground-truth data, and the influence of transfer learning on objective measures of explanation performance has not been investigated. Here, we propose a benchmark dataset that allows explanation performance to be quantified in a realistic magnetic resonance imaging (MRI) classification task. We employ this benchmark to study the influence of transfer learning on the quality of explanations. Experimental results show that popular XAI methods applied to the same underlying model differ vastly in performance, even when only correctly classified examples are considered. We further observe that explanation performance depends strongly on the task used for pre-training and on the number of CNN layers pre-trained. These results hold after correcting for a substantial correlation between explanation and classification performance.
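To make the idea of scoring explanations against ground truth concrete, the sketch below shows one common way such a metric can be computed; it is an illustrative assumption, not the paper's actual evaluation protocol. It treats each pixel's attribution magnitude as a detection score for the truly class-relevant region and computes the AUROC against a binary ground-truth mask. The names explanation_auroc, saliency, and gt_mask are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def explanation_auroc(saliency: np.ndarray, gt_mask: np.ndarray) -> float:
    """Score a saliency map against a binary ground-truth mask.

    Assumes both arrays have the same spatial shape. Each pixel's
    saliency magnitude is treated as a score for belonging to the
    genuinely informative region; AUROC summarizes how well the
    explanation ranks informative pixels above uninformative ones.
    (Illustrative metric only, not the benchmark's official one.)
    """
    scores = np.abs(saliency).ravel()      # attribution magnitude per pixel
    labels = gt_mask.astype(int).ravel()   # 1 = pixel is class-relevant
    return roc_auc_score(labels, scores)

# Toy example: a 4x4 "image" whose top-left quadrant is the informative region.
gt = np.zeros((4, 4))
gt[:2, :2] = 1
sal = np.random.rand(4, 4) * 0.1
sal[:2, :2] += 0.8                         # a good explanation highlights that region
print(f"explanation AUROC: {explanation_auroc(sal, gt):.2f}")
```

A chance-level explanation scores near 0.5 on this metric, while a map concentrated on the ground-truth region approaches 1.0, which makes it convenient for comparing XAI methods on the same model.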

Keywords