IEEE Access (Jan 2024)

Ontology-Based Neuro-Symbolic AI: Effects on Prediction Quality and Explainability

  • Alexander Smirnov,
  • Andrew Ponomarev,
  • Anton Agafonov

DOI
https://doi.org/10.1109/ACCESS.2024.3485185
Journal volume & issue
Vol. 12
pp. 156609 – 156626

Abstract

Artificial intelligence (AI) systems based on neural networks are becoming ubiquitous. However, in many cases their application is constrained by a lack of interpretability. One way of overcoming this limitation is the neuro-symbolic approach, in which the neural network is complemented by symbolic structures that express existing knowledge and make it understandable to humans. Such approaches can make predictions more understandable and interpretable (owing to the connection to human-understandable symbolic structures), and they can also offer an additional advantage by allowing models to be trained with less data (owing to the use of prior knowledge). This paper focuses on a subset of neuro-symbolic approaches in which domain ontologies play the role of the symbolic structures. The paper discusses existing methods for building ontology-aware explainable neural networks and ways of leveraging ontologies in forming explanations. Based on this analysis, it proposes a computational framework for building ontology-aware self-explaining neural networks. The proposed framework allows several specializations, and it is shown that these specializations improve prediction quality. Finally, the paper presents the results of a user study showing that ontology-based explanations can improve the understandability of an AI model and the efficiency of human-AI interaction.
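
To illustrate the general idea of an ontology-aware self-explaining network (not the authors' specific framework), the following minimal sketch assumes a concept-bottleneck-style architecture: an intermediate layer predicts activations for ontology concepts, the final prediction is computed only from those activations, and the active concepts are reported as the explanation. The concept names and network sizes are hypothetical.

```python
# Hypothetical illustration of an ontology-aware self-explaining network.
# Intermediate outputs correspond to ontology concepts; the classifier sees
# only those concepts, so explanations can be given in ontology terms.
import torch
import torch.nn as nn

ONTOLOGY_CONCEPTS = ["has_wings", "has_beak", "nocturnal"]  # assumed toy ontology


class OntologyAwareNet(nn.Module):
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        # Bottleneck: one logit per ontology concept
        self.concept_head = nn.Linear(32, len(ONTOLOGY_CONCEPTS))
        # Final prediction depends only on the concept activations
        self.classifier = nn.Linear(len(ONTOLOGY_CONCEPTS), n_classes)

    def forward(self, x):
        h = self.encoder(x)
        concepts = torch.sigmoid(self.concept_head(h))
        return self.classifier(concepts), concepts


def explain(concepts: torch.Tensor, threshold: float = 0.5) -> str:
    """Map concept activations back to ontology terms for a textual explanation."""
    active = [name for name, p in zip(ONTOLOGY_CONCEPTS, concepts.tolist()) if p > threshold]
    return "Predicted because: " + (", ".join(active) or "no concept strongly active")


model = OntologyAwareNet(n_features=10, n_classes=2)
logits, concepts = model(torch.randn(1, 10))
print(explain(concepts[0]))
```

In such a design, the same intermediate concepts that drive the prediction also form the explanation, which is what makes the model "self-explaining" rather than explained post hoc.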

Keywords