Big Data and Cognitive Computing (Jul 2023)

An Approach Based on Recurrent Neural Networks and Interactive Visualization to Improve Explainability in AI Systems

  • William Villegas-Ch,
  • Joselin García-Ortiz,
  • Angel Jaramillo-Alcazar

DOI
https://doi.org/10.3390/bdcc7030136
Journal volume & issue
Vol. 7, no. 3
p. 136

Abstract

This paper investigated the importance of explainability in artificial intelligence models and its application in the context of prediction in Formula 1. A step-by-step analysis was carried out, including collecting and preparing data from previous races, training an AI model to make predictions, and applying explainability techniques to that model. Two approaches were used: the attention technique, which visualizes the most relevant parts of the input data as heat maps, and the permutation importance technique, which evaluates the relative importance of each feature. The results revealed that feature length and qualifying performance are crucial variables for position predictions in Formula 1. These findings highlight the relevance of explainability in AI models, not only in Formula 1 but also in other fields and sectors, by ensuring fairness, transparency, and accountability in AI-based decision making. The results also provide a practical methodology for implementing explainability in Formula 1 and other domains.
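
The two explainability steps named in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: the data are synthetic, the feature names (qualifying_position, track_length_km, pit_stop_time_s) are hypothetical, a scikit-learn RandomForestRegressor stands in for the paper's recurrent model, and the permutation importances stand in for learned attention weights in the heat-map plot.

```python
# Minimal sketch of the two explainability steps described in the abstract.
# All data, feature names, and the model are illustrative stand-ins; the
# paper itself uses a recurrent neural network on historical race data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["qualifying_position", "track_length_km", "pit_stop_time_s"]

# Synthetic stand-in for past-race data: finishing position depends
# strongly on qualifying position and weakly on track length.
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out score; larger drops mean the feature matters more.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names,
                           result.importances_mean,
                           result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")

# Heat-map visualization: plot per-feature relevance scores (here the
# permutation importances, standing in for attention weights) so the
# most influential parts of the input are visible at a glance.
plt.imshow(result.importances_mean[np.newaxis, :], cmap="hot", aspect="auto")
plt.xticks(range(len(feature_names)), feature_names, rotation=20)
plt.yticks([])
plt.colorbar(label="relative importance")
plt.tight_layout()
plt.show()
```

In a setting closer to the paper's, the heat map would instead plot the attention weights the recurrent model assigns to each input timestep or feature, but the visualization step is the same.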

Keywords