Frontiers in Artificial Intelligence (Oct 2023)

Revisiting the political biases of ChatGPT

  • Sasuke Fujimoto,
  • Kazuhiro Takemoto

DOI: https://doi.org/10.3389/frai.2023.1232003
Journal volume & issue: Vol. 6

Abstract

Although ChatGPT promises wide-ranging applications, there is concern that it is politically biased; in particular, that it has a left-libertarian orientation. Nevertheless, following recent efforts to reduce such biases, this study re-evaluated the political biases of ChatGPT using political orientation tests and the application programming interface. The effects of the language used in the system, as well as gender and race settings, were evaluated. The results indicate that ChatGPT manifests less political bias than previously assumed; however, they do not entirely rule out political bias. The language used in the system, as well as the gender and race settings, may induce political biases. These findings enhance our understanding of the political biases of ChatGPT and may be useful for bias evaluation and for designing the operational strategy of ChatGPT.

Keywords