Human-Centric Intelligent Systems (Jan 2024)

A Local Explainability Technique for Graph Neural Topic Models

  • Bharathwajan Rajendran
  • Chandran G. Vidya
  • J. Sanil
  • S. Asharaf

DOI: https://doi.org/10.1007/s44230-023-00058-8
Journal volume & issue: Vol. 4, No. 1, pp. 53–76

Abstract

Topic modelling is a Natural Language Processing (NLP) technique that has gained popularity in recent years. It identifies word co-occurrence patterns within a document corpus to reveal hidden topics. The Graph Neural Topic Model (GNTM) is a topic modelling technique that uses Graph Neural Networks (GNNs) to learn document representations effectively, yielding high-precision documents-topics and topics-words probability distributions. Such models find immense application in many sectors, including healthcare, financial services, and safety-critical systems like autonomous cars. However, GNTM is not explainable: users cannot comprehend its underlying decision-making process. This paper introduces a technique to explain the documents-topics probability distribution output of GNTM. The explanation is achieved by building a local explainable model, namely a probabilistic Naïve Bayes classifier. Experimental results on various benchmark NLP datasets show a fidelity of 88.39% between the predictions of GNTM and the local explainable model, implying that the proposed technique can effectively explain the documents-topics probability distribution output of GNTM.
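
The surrogate-and-fidelity idea described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption, not the paper's implementation: gntm_topic_probs stands in for GNTM's documents-topics output, bag-of-words counts stand in for the document features, scikit-learn's MultinomialNB plays the role of the local Naïve Bayes surrogate, and fidelity is taken as agreement on each document's most probable topic.

```python
# Minimal sketch of explaining a black-box topic model with a local
# Naive Bayes surrogate and measuring fidelity between the two.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

documents = [
    "graph neural networks learn document representations",
    "naive bayes is a simple probabilistic classifier",
    "topic models reveal hidden word co-occurrence patterns",
    "autonomous cars are safety critical systems",
]

# Stand-in for GNTM's documents-topics output: one probability
# distribution over 3 topics per document (each row sums to 1).
rng = np.random.default_rng(0)
gntm_topic_probs = rng.dirichlet(np.ones(3), size=len(documents))

# Hard topic assignments: the most probable topic per document.
gntm_labels = gntm_topic_probs.argmax(axis=1)

# Local surrogate: a Naive Bayes classifier over bag-of-words
# features, trained to mimic the black-box topic assignments.
X = CountVectorizer().fit_transform(documents)
surrogate = MultinomialNB().fit(X, gntm_labels)

# Fidelity: fraction of documents on which the interpretable
# surrogate agrees with the black-box model.
fidelity = (surrogate.predict(X) == gntm_labels).mean()
print(f"fidelity: {fidelity:.2%}")
```

In this reading, the surrogate's per-word likelihoods provide the human-readable rationale for a document's topic assignment, and a high fidelity score (88.39% in the paper's experiments) indicates that the rationale faithfully tracks what GNTM actually predicts.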

Keywords