E3S Web of Conferences (Jan 2023)

Exploring Explainable Artificial Intelligence for Transparent Decision Making

  • Praveenraj D. David Winster,
  • Victor Melvin,
  • Vennila C.,
  • Hussein Alawadi Ahmed,
  • Diyora Pardaeva,
  • Vasudevan N.,
  • Avudaiappan T.

DOI: https://doi.org/10.1051/e3sconf/202339904030
Journal volume & issue: Vol. 399, p. 04030

Abstract

Artificial intelligence (AI) has become a potent tool in many fields, enabling complicated tasks to be completed with remarkable effectiveness. However, as AI systems grow more complex, concerns about their interpretability and transparency have become increasingly prominent. Explainable Artificial Intelligence (XAI) methodologies are therefore more important than ever in decision-making processes, where the capacity to understand and trust AI-based judgments is crucial. This paper explores the concept of XAI and its importance for promoting transparent decision-making. XAI approaches close the cognitive gap between complicated algorithms and human comprehension by enabling users to understand and analyze the inner workings of AI models. In doing so, XAI equips stakeholders to evaluate and trust AI systems, helping to ensure fairness, accountability, and ethical standards in fields such as healthcare and finance, where AI-based decisions have substantial ramifications. Despite ongoing hurdles, the development of XAI is essential for realizing AI's full potential while preserving transparency and human-centric decision-making.
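As a minimal sketch of the kind of XAI technique the abstract alludes to, the snippet below computes permutation feature importance, which estimates how much each input feature contributes to a model's predictions by measuring how accuracy degrades when that feature is shuffled. The synthetic data and model choice here are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the paper's method): permutation feature
# importance as a simple model-agnostic explanation technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
# Hypothetical data: feature 0 fully determines the label, feature 1 is noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Shuffling an informative feature hurts accuracy; shuffling noise does not,
# so the importance scores reveal which inputs actually drive the decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
importances = result.importances_mean
print(importances)
```

Because the method only requires the ability to query the model, the same explanation procedure applies to any black-box classifier, which is what makes it useful for transparency audits in domains like healthcare or finance.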
