Frontiers in Big Data (May 2021)

MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis

  • Rushil Anirudh,
  • Jayaraman J. Thiagarajan,
  • Rahul Sridhar,
  • Peer-Timo Bremer

DOI: https://doi.org/10.3389/fdata.2021.589417
Journal volume & issue: Vol. 4

Abstract

Interpretability has emerged as a crucial aspect of building trust in machine learning systems, aimed at providing insights into the workings of complex neural networks that are otherwise opaque to a user. There is a plethora of existing solutions addressing various aspects of interpretability, ranging from identifying prototypical samples in a dataset to explaining image predictions or explaining misclassifications. While all of these diverse techniques address seemingly different aspects of interpretability, we hypothesize that a large family of interpretability tasks are variants of the same central problem: identifying relative change in a model's prediction. This paper introduces MARGIN, a simple yet general approach to address a large set of interpretability tasks. MARGIN exploits ideas rooted in graph signal analysis to determine influential nodes in a graph, which are defined as those nodes that maximally describe a function defined on the graph. By carefully defining task-specific graphs and functions, we demonstrate that MARGIN outperforms existing approaches in a number of disparate interpretability challenges.
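To make the abstract's core idea concrete, the sketch below illustrates one way to score "influential nodes" of a task-specific graph: treat a per-sample quantity (e.g., loss or prediction confidence) as a graph signal and rank nodes by how strongly they explain its local variation. The use of the unnormalized graph Laplacian as a high-pass filter, and the function names and toy data, are assumptions made for illustration; they are not taken from the paper itself.

    # Minimal sketch (assumed Laplacian high-pass filtering, not the paper's exact estimator).
    import numpy as np

    def influence_scores(W: np.ndarray, f: np.ndarray) -> np.ndarray:
        """Score each node of a graph by the local variation of a signal f.

        W : (n, n) symmetric adjacency (affinity) matrix of the task-specific graph.
        f : (n,)   graph signal, e.g., per-sample loss or prediction confidence.
        """
        D = np.diag(W.sum(axis=1))   # degree matrix
        L = D - W                    # unnormalized graph Laplacian
        return np.abs(L @ f)         # high-pass response: large where f changes sharply

    # Toy usage: 5 samples on a chain graph, with the signal spiking at node 2.
    W = np.zeros((5, 5))
    for i in range(4):
        W[i, i + 1] = W[i + 1, i] = 1.0
    f = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    print(influence_scores(W, f))    # node 2 and its neighbors receive the highest scores

In this toy example the node carrying the abrupt change in the signal is ranked most influential, which mirrors the abstract's notion of nodes that "maximally describe a function defined on the graph."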

Keywords