PLOS Global Public Health (Jan 2024)

A scientometric analysis of fairness in health AI literature.

  • Isabelle Rose I Alberto,
  • Nicole Rose I Alberto,
  • Yuksel Altinel,
  • Sarah Blacker,
  • William Warr Binotti,
  • Leo Anthony Celi,
  • Tiffany Chua,
  • Amelia Fiske,
  • Molly Griffin,
  • Gulce Karaca,
  • Nkiruka Mokolo,
  • David Kojo N Naawu,
  • Jonathan Patscheider,
  • Anton Petushkov,
  • Justin Michael Quion,
  • Charles Senteio,
  • Simon Taisbak,
  • İsmail Tırnova,
  • Harumi Tokashiki,
  • Adrian Velasquez,
  • Antonio Yaghy,
  • Keagan Yap

DOI
https://doi.org/10.1371/journal.pgph.0002513
Journal volume & issue
Vol. 4, no. 1
p. e0002513

Abstract

Artificial intelligence (AI) and machine learning are central components of today's medical environment. The fairness of AI, i.e., its freedom from bias, has repeatedly come into question. This study investigates the diversity of the members of academia whose scholarship poses questions about the fairness of AI. Articles combining the topics of fairness, artificial intelligence, and medicine were selected from PubMed, Google Scholar, and Embase using keyword searches. Eligibility screening and data extraction were performed manually and cross-checked by a second author for accuracy. Selected articles were cleaned and organized in Microsoft Excel; spatial diagrams were generated using Tableau Public, and additional graphs were generated using Matplotlib and Seaborn. Linear and logistic regressions were conducted in Python to measure the relationships between funding status, number of citations, and the gender demographics of the authorship team. We identified 375 eligible publications, including research and review articles concerning AI and fairness in healthcare. Analysis of the bibliographic data revealed an overrepresentation of authors who are white, male, and from high-income countries, especially in the roles of first and last author. Papers whose authors are based in higher-income countries were also more likely to be cited more often and to be published in higher-impact journals. These findings highlight the lack of diversity among the authors in the AI fairness community whose work gains the largest readership, potentially compromising the very impartiality that the community is working towards.
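The abstract's regression step can be illustrated with a minimal sketch. The variable names, synthetic data, and effect sizes below are purely hypothetical (the paper does not publish its code or dataset); the sketch only shows the general shape of an ordinary-least-squares fit for citation counts and a logistic fit for funding status, implemented here with plain NumPy rather than whatever library the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bibliographic dataset: one row per paper.
n = 375  # matches the number of eligible publications in the study
female_frac = rng.uniform(0, 1, n)   # illustrative: fraction of female authors
high_income = rng.integers(0, 2, n)  # illustrative: 1 if authors based in a high-income country
citations = 5 + 8 * high_income + rng.normal(0, 3, n)  # synthetic outcome

# Design matrix with an intercept column.
X = np.column_stack([np.ones(n), female_frac, high_income])

# --- Linear regression (ordinary least squares via lstsq) ---
beta, *_ = np.linalg.lstsq(X, citations, rcond=None)
print("OLS coefficients [intercept, female_frac, high_income]:", beta)

# --- Logistic regression via Newton-Raphson on the log-likelihood ---
funded = (rng.uniform(0, 1, n) < 0.3 + 0.3 * high_income).astype(float)
w = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))          # predicted funding probability
    grad = X.T @ (funded - p)             # gradient of the log-likelihood
    hess = X.T @ (X * (p * (1 - p))[:, None])  # Fisher information
    w += np.linalg.solve(hess, grad)
print("Logistic coefficients [intercept, female_frac, high_income]:", w)
```

A positive `high_income` coefficient in either fit would correspond to the paper's finding that authorship from higher-income countries is associated with more citations and with funded work; here those associations are built into the synthetic data by construction.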