Proceedings of the XXth Conference of Open Innovations Association FRUCT (Nov 2024)

Ethical AI with Balancing Bias Mitigation and Fairness in Machine Learning Models

  • Dmytro Chornomordenko,
  • Khalida Walid Nathim,
  • Nada Abdulkareem Hameed,
  • Saja Abdulfattah Salih,
  • Nada Adnan Taher,
  • Hayder Mahmood Salman

DOI: https://doi.org/10.23919/FRUCT64283.2024.10749873
Journal volume & issue: Vol. 36, no. 1, pp. 797–807

Abstract


The rapid integration of Artificial Intelligence (AI) into critical domains such as healthcare, finance, and criminal justice has raised significant ethical concerns, particularly around bias and fairness in machine learning models. Despite their potential to improve decision-making processes, these models can perpetuate or even exacerbate existing societal biases. This study investigates approaches to bias mitigation in AI systems, focusing on balancing fairness and performance. A systematic review of 150 research articles published between 2018 and 2023 was conducted, along with experiments on 25 benchmark datasets evaluating various machine learning algorithms and bias mitigation techniques. Results showed a 23% reduction in bias and an average 17% improvement across nine fairness metrics during model training, though at the cost of up to a 9% drop in overall accuracy. The study highlights the trade-off between fairness and performance, suggesting that creating AI systems that are both fair and effective remains an ongoing challenge. The findings underscore the need for adaptive frameworks that mitigate bias without significantly compromising model performance. Future research should explore domain-specific adaptations and scalable solutions for integrating fairness throughout the AI development process to ensure more equitable outcomes.
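
To make the fairness/accuracy trade-off described above concrete, the sketch below is an illustrative example, not code from the paper: the synthetic dataset, the Kamiran–Calders-style reweighing step, and the choice of demographic parity difference as the fairness metric are all assumptions standing in for the algorithms and metrics the study evaluates. It trains a baseline classifier, applies one pre-processing mitigation technique, and reports accuracy alongside a single fairness metric before and after mitigation.

```python
# Illustrative sketch of measuring a fairness/accuracy trade-off.
# Assumptions: synthetic data, logistic regression, reweighing-style mitigation,
# and demographic parity difference as the fairness metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Synthetic data: one binary sensitive attribute `a` that leaks into the label.
a = rng.integers(0, 2, size=n)                      # sensitive group membership
x = rng.normal(size=(n, 5)) + a[:, None] * 0.5      # features correlated with `a`
y = (x[:, 0] + 0.8 * a + rng.normal(size=n) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
    x, y, a, test_size=0.3, random_state=0
)

def demographic_parity_diff(y_pred, group):
    """|P(y_hat=1 | group=1) - P(y_hat=1 | group=0)|: one common fairness metric."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Baseline model: no mitigation.
base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = base.predict(X_te)
print("baseline  acc=%.3f  dp_diff=%.3f"
      % ((pred == y_te).mean(), demographic_parity_diff(pred, a_te)))

# Reweighing-style mitigation: weight each (group, label) cell so that group
# membership and label appear statistically independent in the training set.
w = np.ones(len(y_tr))
for g in (0, 1):
    for lbl in (0, 1):
        cell = (a_tr == g) & (y_tr == lbl)
        expected = (a_tr == g).mean() * (y_tr == lbl).mean()
        observed = cell.mean()
        w[cell] = expected / observed

fair = LogisticRegression(max_iter=1000).fit(X_tr, y_tr, sample_weight=w)
pred = fair.predict(X_te)
print("reweighed acc=%.3f  dp_diff=%.3f"
      % ((pred == y_te).mean(), demographic_parity_diff(pred, a_te)))
```

Running the script typically shows the disparity shrinking after reweighing while accuracy drops slightly, which is the same kind of trade-off the study quantifies across its 25 benchmark datasets and nine fairness metrics.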

Keywords