Acta Universitatis Sapientiae: Informatica (Oct 2024)

Exploring the Impact of Backbone Architecture on Explainable CNNs’ Interpretability

  • Ábel Portik,
  • Adél Bajcsi,
  • Annamária Szenkovits,
  • Zalán Bodó

DOI
https://doi.org/10.47745/ausi-2024-0007
Journal volume & issue
Vol. 16, no. 1
pp. 105–123

Abstract

The growing demand for interpretable models in machine learning underscores the importance of transparency in decision-making processes for building trust and ensuring accountability in AI systems. Unlike complex black-box models, interpretable models shed light on the reasoning behind predictions or classifications. In image processing, convolutional networks often serve as backbone models that strongly influence overall performance, yet in ways that are not entirely transparent. This research assesses and compares the performance of explainable neural network-based image classification models built on various backbone architectures. The evaluation covers multiple performance metrics, including prediction accuracy and specialized measures tailored to interpretability, providing insights into the effectiveness of interpretable models in image classification tasks.

Keywords