International Journal of Information Management Data Insights (Nov 2024)

A sociotechnical perspective for explicit unfairness mitigation techniques for algorithm fairness

  • Nimisha Singh,
  • Amita Kapoor,
  • Neha Soni

Journal volume & issue
Vol. 4, no. 2
p. 100259

Abstract


With the increasing use of artificial intelligence (AI) applications in decision making, there are heightened concerns about the fairness of such decisions. Initiatives such as Responsible AI, Fair ML, and Ethics in AI have provided guidelines for developing AI in an attempt to address these challenges. These approaches have been criticized for taking a top-down approach, applying abstract principles to practice without accounting for the context and particularities of algorithm development. Using a sociotechnical lens, we propose a framework for developing fair algorithms. We apply this framework to mitigate unfairness in three distinct datasets: COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), Communities and Crime, and a synthetic dataset. Our methodology involves nonconvex optimization for regression with fairness constraints. The experimentation examines the correlation coefficient, Area Under the Curve (AUC), and Root Mean Square Error (RMSE) in relation to a fairness parameter, epsilon. Our findings suggest three objectively testable propositions, namely: 1) Fairness Constraints and Predictive Power, 2) Fairness Constraints and Discriminatory Ability, and 3) Fairness Constraints and Prediction Accuracy.
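To make the abstract's methodology concrete, the sketch below illustrates the general idea of regression under a fairness constraint governed by a parameter epsilon: minimize prediction error subject to the predictions being (nearly) decorrelated from a sensitive attribute. This is a minimal generic illustration on synthetic data, not the paper's actual formulation; the dataset, the covariance-based constraint, and the solver choice (SciPy's SLSQP) are all assumptions for the sake of the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data (hypothetical): sensitive attribute s leaks into feature x1.
n = 200
s = rng.integers(0, 2, n).astype(float)       # sensitive attribute
x1 = s + rng.normal(0, 1, n)                  # feature correlated with s
x2 = rng.normal(0, 1, n)                      # neutral feature
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(0, 0.5, n)

def fit_fair(eps):
    """Least-squares fit subject to |cov(y_hat, s)| <= eps."""
    s_c = s - s.mean()

    def loss(w):
        return np.mean((X @ w - y) ** 2)

    def fairness(w):
        # SLSQP inequality constraints require g(w) >= 0,
        # so g(w) = eps - |cov(y_hat, s)|.
        return eps - abs((X @ w) @ s_c / n)

    res = minimize(loss, np.zeros(3), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": fairness}])
    return res.x

w_loose = fit_fair(eps=10.0)   # effectively unconstrained
w_tight = fit_fair(eps=0.01)   # predictions nearly decorrelated from s

for label, w in [("loose", w_loose), ("tight", w_tight)]:
    yhat = X @ w
    rmse = np.sqrt(np.mean((yhat - y) ** 2))
    cov = abs(yhat @ (s - s.mean()) / n)
    print(f"{label}: RMSE={rmse:.3f}, |cov(y_hat, s)|={cov:.3f}")
```

Sweeping epsilon from loose to tight traces the fairness-accuracy trade-off the abstract's three propositions describe: as the constraint tightens, RMSE can only rise while the predictions' dependence on the sensitive attribute falls.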

Keywords