ICTACT Journal on Soft Computing (Oct 2022)

COMPARATIVE STUDY OF XAI USING FORMAL CONCEPT LATTICE AND LIME

  • Bhaskaran Venkatsubramaniam
  • Pallav Kumar Baruah

DOI
https://doi.org/10.21917/ijsc.2022.0396
Journal volume & issue
Vol. 13, no. 1
pp. 2782–2791

Abstract

Local Interpretable Model-Agnostic Explanations (LIME) is a technique for explaining a black box machine learning model through a surrogate model. While the technique is very popular, its explanations are, by construction, generated from the surrogate model rather than directly from the black box model. In sensitive domains such as healthcare, this may not be acceptable as trustworthy. Surrogate techniques also assume that features are independent and report the weights of the surrogate linear model as feature importance. In real-life datasets, features may be dependent, and a combination of features with specific values, rather than individual feature importance, can be the deciding factor. LIME also fits the surrogate model on random instances generated around the point of interest; these instances need not belong to the original data and may even be meaningless. In this work, we compare LIME with explanations derived from the formal concept lattice. This approach uses no surrogate model; instead, it deterministically generates synthetic data that respects the implications present in the original dataset rather than generating it at random. It obtains crucial feature combinations, together with their values, as decision factors without presuming dependence or independence of features. Its explanations cover not only the point of interest but also a global explanation of the model and similar and contrastive examples around the point of interest. The explanations are textual and hence easier to comprehend than the weights of a surrogate linear model.
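
For context, the surrogate workflow the abstract critiques can be sketched with the lime Python package. The sketch below is illustrative only and is not drawn from the paper; the dataset, model, and parameter choices are assumptions for demonstration:

```python
# Minimal LIME sketch: perturb an instance, query the black box on the
# perturbed samples, fit a weighted linear surrogate, and report its
# weights as the "explanation" (the weights the paper critiques).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Explain the model's prediction for one point of interest.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())  # (feature condition, surrogate weight) pairs
```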
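
By contrast, the lattice-based approach rests on formal concept analysis, where each concept pairs a set of objects with exactly the attributes they all share, and explanations are read off these closed feature combinations. A minimal, self-contained sketch of concept enumeration over a toy binary context follows; the objects and attribute names are hypothetical and not taken from the paper:

```python
# Naive formal concept enumeration: close every attribute subset and
# keep the distinct (extent, intent) pairs.
from itertools import combinations

# Toy binary context: object -> attributes it possesses (hypothetical).
context = {
    "patient1": {"fever", "cough"},
    "patient2": {"fever"},
    "patient3": {"cough", "fatigue"},
    "patient4": {"fever", "cough", "fatigue"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects that have every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

# A formal concept is a pair (objs, attrs) with extent(attrs) == objs
# and intent(objs) == attrs; closing each subset yields all of them.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        objs = extent(set(combo))
        concepts.add((frozenset(objs), frozenset(intent(objs))))

for objs, attrs in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(objs), "<->", sorted(attrs))
```

Each printed pair is a deterministic statement of which feature combination characterizes which objects, which is the kind of textual, combination-level explanation the abstract contrasts with LIME's surrogate weights.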

Keywords