Proceedings of the XXth Conference of Open Innovations Association FRUCT (Nov 2024)

Developing Interpretable Models for Complex Decision-Making

  • Volodymyr Mikhav
  • Mohammed Ahmed Shakir
  • Haithem Kareem Abass
  • Ola Farooq Jelwy
  • Husam Najm Abbood Al-Bayati
  • Salman Mahmood Salman
  • Nataliia Bodnar

DOI: https://doi.org/10.23919/FRUCT64283.2024.10749922
Journal volume & issue: Vol. 36, no. 1, pp. 66–75

Abstract

Background: Due to the rising complexity of decision-making processes, there is a growing need for machine-learning models that can be easily understood. Ensuring the ethical use of AI across fields requires establishing trust in, and understanding of, the results that models generate. Objective: This article explores tactics and approaches that improve the clarity and comprehensibility of intricate decision-making models. Our study investigates methods that enable end users, domain experts, and decision analysts to understand and trust the results these models produce. Methodology: The article examines the inherent difficulty of managing the trade-off between model interpretability and complexity and recommends identifying the most favorable balance point between the two. The study investigates several approaches, such as formulating novel algorithms that rank feature importance, providing instructive explanations, and building transparent model architectures. Results: The practical significance and promise of interpretable models are shown through concrete examples and applications in finance, healthcare, and autonomous systems. These examples demonstrate how interpretability builds assurance, enables ethical oversight, and reduces potential biases within complex models. Conclusion: Developing understandable models for complex decision-making is more than an academic endeavor: it supports the ethical and responsible deployment of AI. This publication is a resource for academics, practitioners, and policymakers engaged in creating, assessing, and implementing interpretable models, with the aim of facilitating dependable and transparent decision-making across many fields.
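
To make the feature-importance idea from the methodology concrete, the sketch below applies permutation importance, a standard model-agnostic technique, to a generic ensemble model. It is a minimal illustration only: the synthetic dataset, the RandomForestClassifier, and the scikit-learn permutation_importance call are assumptions chosen for demonstration, not the novel algorithms proposed in the article.

```python
# Illustrative sketch only: permutation feature importance as one way to make a
# complex model's decisions more interpretable. The model, dataset, and
# parameters below are assumptions for demonstration, not the article's method.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a domain dataset (e.g., credit scoring or triage).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A complex, otherwise opaque ensemble model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade the
# held-out score? Larger drops indicate features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: importance = {mean:.3f} +/- {std:.3f}")
```

Ranking features this way gives end users and domain experts a compact, model-agnostic summary of what drives predictions, which is one route to the assurance and bias auditing discussed in the Results.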

Keywords