AI Open (Jan 2023)

Learning fair representations via an adversarial framework

  • Huadong Qiu,
  • Rui Feng,
  • Ruoyun Hu,
  • Xiao Yang,
  • Shaowa Lin,
  • Quanjin Tao,
  • Yang Yang

Journal volume & issue
Vol. 4
pp. 91–97

Abstract

Fairness has become a central issue for our research community as classification algorithms are adopted in societally critical domains such as recidivism prediction and loan approval. In this work, we consider the potential bias based on protected attributes (e.g., race and gender) and tackle this problem by learning latent representations of individuals that are statistically indistinguishable between protected groups while preserving enough other information for classification. To this end, we develop a minimax adversarial framework with a generator that captures the data distribution and generates latent representations, and a critic that ensures the distributions of representations across different protected groups are similar. Our framework provides theoretical guarantees with respect to statistical parity and individual fairness. Empirical results on four real-world datasets also show that the learned representations can be used effectively for classification tasks such as credit risk prediction while obstructing information related to protected groups, especially when removing protected attributes alone is not sufficient for fair classification.
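
The minimax setup the abstract describes can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the authors' implementation: the module names, network sizes, and the use of a Wasserstein-style gap between group-conditional critic scores are placeholders for the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Encodes raw features x into a latent representation z."""
    def __init__(self, in_dim, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, z_dim))

    def forward(self, x):
        return self.net(x)

class Critic(nn.Module):
    """Scores representations; trained to tell protected groups apart."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z).squeeze(-1)

def train_step(x, y, a, gen, clf, critic, opt_g, opt_c, lam=1.0):
    """One alternating minimax update.

    x: features, y: task labels (0/1), a: protected attribute (0/1).
    clf is any nn.Module mapping z to a task logit. Batches are
    assumed to contain examples from both protected groups.
    """
    # Critic step: widen the gap between the mean scores of the two
    # protected groups (a Wasserstein-style surrogate -- an assumption
    # here, not necessarily the paper's exact objective).
    z = gen(x).detach()
    gap = critic(z[a == 1]).mean() - critic(z[a == 0]).mean()
    opt_c.zero_grad()
    (-gap).backward()
    opt_c.step()

    # Generator/classifier step: predict y well while shrinking the
    # gap, pushing the groups' representation distributions together.
    z = gen(x)
    gap = critic(z[a == 1]).mean() - critic(z[a == 0]).mean()
    task_loss = F.binary_cross_entropy_with_logits(clf(z).squeeze(-1), y.float())
    loss = task_loss + lam * gap
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```

In this sketch, opt_g is assumed to optimize the parameters of both gen and clf jointly, while opt_c owns the critic; alternating the two steps drives the group gap toward zero while keeping z predictive of the task label, mirroring the statistical-parity goal stated in the abstract.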

Keywords