Scientific Data (Mar 2023)

Evaluating explainability for graph neural networks

  • Chirag Agarwal,
  • Owen Queen,
  • Himabindu Lakkaraju,
  • Marinka Zitnik

DOI
https://doi.org/10.1038/s41597-023-01974-x
Journal volume & issue
Vol. 10, no. 1
pp. 1–18

Abstract

As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging because existing graph datasets either lack ground-truth explanations or provide unreliable ones. Here, we introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows ShapeGGen to mimic data from various real-world areas. We include ShapeGGen and several real-world graph datasets in a graph explainability library, GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark GNN explainability methods.
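To make the idea concrete, the following is a minimal, self-contained toy sketch of what a ShapeGGen-style generator does conceptually: plant small motifs (here, triangles) into a random background graph, label motif nodes positively, and record each labeled node's ground-truth explanation as the set of motif edges that determine its label. This is an illustrative analogue only; it is not the GraphXAI API, and all function and variable names here are hypothetical.

```python
# Toy analogue of a ShapeGGen-style generator (illustrative only; NOT the
# GraphXAI API). Plants triangle motifs into a sparse random background graph
# and records, per motif node, a ground-truth edge-level explanation mask.
import random

def generate_toy_dataset(num_motifs=3, base_nodes=10, seed=0):
    rng = random.Random(seed)
    edges = set()
    labels = {}           # node -> 1 if part of a planted motif, else 0
    gt_explanations = {}  # node -> set of edges forming its ground-truth explanation

    # Background graph: each background node gets one random edge (label 0).
    for v in range(base_nodes):
        labels[v] = 0
        u = rng.randrange(base_nodes)
        if u != v:
            edges.add(tuple(sorted((u, v))))

    next_id = base_nodes
    for _ in range(num_motifs):
        # Plant a triangle motif; its three nodes get label 1.
        a, b, c = next_id, next_id + 1, next_id + 2
        next_id += 3
        motif_edges = {(a, b), (b, c), (a, c)}
        edges |= motif_edges
        # Attach the motif to a random background node so the graph is connected-ish.
        edges.add(tuple(sorted((rng.randrange(base_nodes), a))))
        for n in (a, b, c):
            labels[n] = 1
            # Ground truth: exactly the motif's own edges explain this node's label.
            gt_explanations[n] = motif_edges

    return sorted(edges), labels, gt_explanations

def explanation_jaccard(predicted_edges, true_edges):
    """Score a predicted edge-level explanation against the ground truth."""
    predicted, true = set(predicted_edges), set(true_edges)
    union = predicted | true
    return len(predicted & true) / len(union) if union else 1.0

edges, labels, gt = generate_toy_dataset()
```

Because the generator controls where motifs are planted, the ground-truth masks are known by construction, and a candidate explainer's output can be scored directly (e.g., with `explanation_jaccard`); this is the property that distinguishes synthetic benchmarks like ShapeGGen from real-world graphs, where such masks are unavailable or unreliable.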