Scientific Reports (Mar 2025)

Interactive exploration of CNN interpretability via coalitional game theory

  • Lei Yang,
  • Lingmeng Lu,
  • Chao Liu,
  • Jian Zhang,
  • Kehua Guo,
  • Ning Zhang,
  • Fangfang Zhou,
  • Ying Zhao

DOI
https://doi.org/10.1038/s41598-025-94052-8
Journal volume & issue
Vol. 15, no. 1
pp. 1–16

Abstract


Convolutional neural networks (CNNs) are widely used in image classification tasks. Neuron feature visualization techniques generate intuitive images that depict the features extracted by individual neurons, helping users interpret the working mechanism of a CNN. However, a CNN model commonly contains numerous neurons, and manually reviewing every neuron's feature visualization is exhausting, making CNN interpretability exploration inefficient. Inspired by the SHapley Additive exPlanations (SHAP) method from coalitional game theory, a quantitative metric called the Neuron Interpretive Metric (NeuronIM) is proposed to assess the feature expression ability of a neuron's feature visualization by computing the similarity between the feature visualization and the neuron's SHAP image. With NeuronIM, users can rapidly identify important neurons during CNN interpretability exploration. Building on NeuronIM, a metric called the Layer Interpretive Metric (LayerIM) and two interactive interfaces are proposed. LayerIM assesses the interpretability of a convolutional layer by averaging the NeuronIM values of all neurons in that layer. The interactive interfaces display diverse explanatory information in multiple views and provide rich interactions so that users can efficiently accomplish interpretability exploration tasks. A model pruning experiment and use cases demonstrate the effectiveness of the proposed metrics and interfaces.
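To make the two metrics concrete, the sketch below illustrates one plausible reading of the abstract: NeuronIM scores a neuron by the similarity between its feature-visualization image and its SHAP image, and LayerIM averages those scores over a layer. The abstract does not specify which similarity measure is used, so cosine similarity over flattened images is assumed here; the function names and signatures are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def neuron_im(feature_vis: np.ndarray, shap_image: np.ndarray) -> float:
    """NeuronIM sketch: similarity between a neuron's feature-visualization
    image and its SHAP image. Cosine similarity over the flattened arrays
    is an assumed stand-in for the paper's (unspecified) measure."""
    a = feature_vis.astype(np.float64).ravel()
    b = shap_image.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def layer_im(feature_vises, shap_images) -> float:
    """LayerIM as described in the abstract: the average of NeuronIM
    over all neurons in a convolutional layer."""
    scores = [neuron_im(fv, si) for fv, si in zip(feature_vises, shap_images)]
    return float(np.mean(scores))

# Usage with toy data: two "neurons", each with a 32x32 visualization
# and a matching SHAP image.
rng = np.random.default_rng(0)
vises = [rng.random((32, 32)) for _ in range(2)]
shaps = [v + 0.1 * rng.random((32, 32)) for v in vises]
print(layer_im(vises, shaps))
```

Under this reading, ranking neurons by `neuron_im` would let users surface the important neurons quickly, and `layer_im` would summarize a whole layer's interpretability with one number.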

Keywords