IEEE Access (Jan 2022)

Explainable Automatic Industrial Carbon Footprint Estimation From Bank Transaction Classification Using Natural Language Processing

  • Jaime Gonzalez-Gonzalez,
  • Silvia Garcia-Mendez,
  • Francisco De Arriba-Perez,
  • Francisco J. Gonzalez-Castano,
  • Oscar Barba-Seara

DOI
https://doi.org/10.1109/ACCESS.2022.3226324
Journal volume & issue
Vol. 10
pp. 126326–126338

Abstract

Concerns about the effect of greenhouse gases have motivated the development of certification protocols to quantify the industrial carbon footprint (CF). These protocols are manual, work-intensive, and expensive, which has led to a shift towards automatic data-driven approaches to CF estimation, including Machine Learning (ML) solutions. Unfortunately, as in other sectors of interest, the decision-making processes involved in these solutions lack transparency from the end user's point of view: compared to intelligible traditional manual approaches, users must blindly trust their outcomes. In this research, manual and automatic methodologies for CF estimation were reviewed, taking into account their transparency limitations. This analysis led to the proposal of a new explainable ML solution for automatic CF calculation through bank transaction classification. To the best of our knowledge, no previous research has considered the explainability of bank transaction classification for this purpose. For classification, different ML models were employed based on their promising performance in similar problems in the literature, such as Support Vector Machine, Random Forest, and Recursive Neural Networks. The results obtained were in the 90% range for the accuracy, precision, and recall evaluation metrics. From the models' decision paths, the proposed solution estimates the CO₂ emissions associated with bank transactions. The explainability methodology is based on a model-agnostic evaluation of the influence of the input terms extracted from the transaction descriptions, using locally interpretable models. The explanation terms were automatically validated using a similarity metric over the descriptions of the target categories. In conclusion, the explanation performance is satisfactory in terms of the proximity of the explanations to the descriptions of the associated activity sectors, endorsing the trustworthiness of the process for human operators and end users.
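
The classification stage described in the abstract can be sketched with a standard text-classification pipeline. The snippet below is a minimal illustration, not the authors' exact configuration: it assumes TF-IDF features over the transaction descriptions and uses scikit-learn's LinearSVC and RandomForestClassifier as stand-ins for the Support Vector Machine and Random Forest models named above; the example descriptions and sector labels are invented.

```python
# Minimal sketch of the bank-transaction classification stage (assumed
# TF-IDF features; scikit-learn estimators as stand-ins for the paper's
# SVM and Random Forest models). Data and labels are invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

descriptions = [
    "card payment fuel station",
    "invoice steel supplier metals",
    "monthly electricity bill utility",
]
sectors = ["transport", "manufacturing", "energy"]  # hypothetical categories

svm_clf = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
                    ("svm", LinearSVC())])
rf_clf = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
                   ("rf", RandomForestClassifier(n_estimators=200, random_state=0))])

svm_clf.fit(descriptions, sectors)
rf_clf.fit(descriptions, sectors)
print(svm_clf.predict(["diesel purchase gas station"]))  # ['transport'] on this toy data
```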
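
For the model-agnostic, locally interpretable explanation step, one common realization (assumed here; the abstract does not name a specific tool) is LIME. The sketch below reuses the hypothetical rf_clf pipeline from the previous snippet; the Random Forest is used because the LIME text explainer needs class probabilities, which LinearSVC does not expose.

```python
# Sketch of the local, model-agnostic term-influence evaluation using LIME
# (an assumed realization of the "locally interpretable models" mentioned
# in the abstract). Reuses the hypothetical rf_clf pipeline defined above.
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["transport", "manufacturing", "energy"])
explanation = explainer.explain_instance(
    "card payment fuel station",
    rf_clf.predict_proba,  # the pipeline maps raw text to class probabilities
    num_features=4,        # keep the four most influential terms
    top_labels=1,          # explain the most probable class
)
top = explanation.available_labels()[0]
for term, weight in explanation.as_list(label=top):
    print(f"{term}: {weight:+.3f}")  # signed influence of each input term
```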
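
The automatic validation of the explanation terms can likewise be illustrated with a similarity check against the textual description of the target category. The cosine-over-TF-IDF metric and the sector descriptions below are illustrative assumptions, not taken from the paper.

```python
# Sketch of the validation step: cosine similarity between the explanation
# terms and the description of the predicted activity sector. The metric
# choice and the sector descriptions are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sector_descriptions = {  # hypothetical category descriptions
    "transport": "road transport fuel vehicles logistics freight",
    "energy": "electricity gas power generation utility supply",
}

def explanation_similarity(terms, sector):
    """Cosine similarity between explanation terms and the sector text."""
    corpus = [" ".join(terms), sector_descriptions[sector]]
    vectors = TfidfVectorizer().fit_transform(corpus)
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

print(explanation_similarity(["fuel", "station"], "transport"))  # higher = closer
```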
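
Finally, the abstract states that CO₂ emissions are estimated from the classifiers' decision paths; that mechanism is not detailed here, so the snippet below shows only a generic, hypothetical alternative: applying a per-sector emission factor to the transaction amount. The factor values are invented placeholders.

```python
# Purely hypothetical illustration of the final estimation step. The paper
# derives CO2 from decision paths; shown instead is a generic mapping from
# the predicted sector to an invented per-sector emission factor.
EMISSION_FACTOR_KG_CO2_PER_EUR = {"transport": 0.45, "energy": 0.60}  # placeholders

def estimate_co2_kg(amount_eur, sector):
    """Approximate emissions as amount times the sector's emission factor."""
    return amount_eur * EMISSION_FACTOR_KG_CO2_PER_EUR[sector]

print(estimate_co2_kg(120.0, "transport"))  # 54.0 kg CO2 (illustrative)
```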

Keywords