Scientific Reports (Aug 2024)

A novel approach for assessing fairness in deployed machine learning algorithms

  • Shahadat Uddin,
  • Haohui Lu,
  • Ashfaqur Rahman,
  • Junbin Gao

DOI
https://doi.org/10.1038/s41598-024-68651-w
Journal volume & issue
Vol. 14, no. 1
pp. 1–10

Abstract


Fairness in machine learning (ML) emerges as a critical concern as AI systems increasingly influence diverse aspects of society, from healthcare decisions to legal judgments. Many studies show evidence of unfair ML outcomes. However, the current body of literature lacks a statistically validated approach for evaluating the fairness of a deployed ML algorithm against a dataset. This research introduces a novel evaluation approach, based on k-fold cross-validation and statistical t-tests, for assessing the fairness of ML algorithms. The approach was applied across five benchmark datasets using six classical ML algorithms. Considering four fair-ML definitions drawn from the current literature, our analysis showed that the same dataset can generate a fair outcome for one ML algorithm but an unfair result for another. This observation reveals complex, context-dependent fairness issues in ML, complicated further by the varied operational mechanisms of the underlying ML models. The proposed approach enables researchers to check whether deploying a given ML algorithm against a dataset is fair with respect to a protected attribute. We also discuss the broader implications of the proposed approach, highlighting a notable variability in its fairness outcomes. Our discussion underscores the need for adaptable fairness definitions and for methods that enhance the fairness of ensemble approaches, aiming to advance fair-ML practices and ensure equitable AI deployment across societal sectors.
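The abstract does not give implementation details, but one plausible reading of the k-fold cross-validation plus t-test procedure is sketched below: compute a per-fold fairness gap (here, a statistical-parity difference for a binary protected attribute) and test whether its mean differs significantly from zero. The dataset, protected attribute, fairness definition, and classifier are all hypothetical stand-ins, not the paper's actual benchmarks or method.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)

# Synthetic data with a binary protected attribute (a hypothetical
# stand-in for the paper's benchmark datasets).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
protected = rng.integers(0, 2, size=len(y))  # 0/1 group membership

gaps = []  # per-fold difference in positive-prediction rates between groups
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    grp = protected[test_idx]
    # Statistical-parity gap: P(yhat=1 | group 0) - P(yhat=1 | group 1).
    # This is only one of the four fairness definitions the paper considers.
    gaps.append(pred[grp == 0].mean() - pred[grp == 1].mean())

# One-sample t-test: is the mean per-fold gap significantly nonzero?
t_stat, p_value = stats.ttest_1samp(gaps, popmean=0.0)
print(f"mean gap = {np.mean(gaps):.4f}, t = {t_stat:.3f}, p = {p_value:.4f}")
# Under this reading, p < 0.05 would flag the algorithm-dataset pair as
# unfair with respect to the protected attribute for this definition.
```

Running the same loop with a different classifier (e.g., a tree ensemble) can flip the test's verdict on the same dataset, which mirrors the abstract's observation that fairness is algorithm-dependent as well as data-dependent.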

Keywords