Acta Universitatis Sapientiae: Informatica (Oct 2024)
Exploring the Impact of Backbone Architecture on Explainable CNNs’ Interpretability
Abstract
The growing demand for interpretable models in machine learning underscores the importance of transparency in decision-making processes for building trust and ensuring accountability in AI systems. Unlike complex black-box models, interpretable models shed light on the reasoning behind predictions or classifications. In image processing, explainable classifiers typically rely on convolutional networks as backbone models, and the choice of backbone strongly, though not transparently, influences overall performance. This research assesses and compares the performance of explainable neural network-based image classification models built on various backbone architectures. The evaluation combines standard performance metrics, such as prediction accuracy, with specialized measurements tailored to interpretability, providing insights into the effectiveness of interpretable models in image classification tasks.
Keywords