IEEE Access (Jan 2023)

Generating Real-Time Explanations for GNNs via Multiple Specialty Learners and Online Knowledge Distillation

  • Tien-Cuong Bui
  • Van-Duc Le
  • Wen-Syan Li

DOI
https://doi.org/10.1109/ACCESS.2023.3270385
Journal volume & issue
Vol. 11
pp. 40790 – 40808

Abstract

Graph Neural Networks (GNNs) have become ubiquitous in numerous applications, necessitating explanations of their predictions. However, explaining GNNs is challenging due to the complexity of graph data and model execution. Post-hoc explanation approaches have gained popularity for their versatility, despite their additional computational costs. Although intrinsically interpretable models can provide instant explanations, they are usually model-specific and can explain only particular GNNs. To address these challenges, we propose a novel, general, and fast GNN explanation framework named SCALE. SCALE trains multiple specialty learners to explain GNNs, because building a single powerful explainer that examines the attributions of interactions in input graphs is complicated. During training, a black-box GNN model guides the learners based on an online knowledge distillation paradigm. During the explanation phase, explanations of predictions are generated by multiple explainers corresponding to the trained learners. Edge masking and random walk with restart procedures provide structural explanations for graph-level and node-level predictions, respectively. A feature attribution module provides overall summaries and instance-level feature contributions. We compare SCALE with state-of-the-art baselines through extensive experiments to demonstrate its explanation correctness and execution performance. Furthermore, we conduct a user study and a series of ablation studies to understand its strengths and weaknesses.

Keywords