Computers in Human Behavior: Artificial Humans (Aug 2024)

The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing

  • Hilda Hadan,
  • Derrick M. Wang,
  • Reza Hadi Mogavi,
  • Joseph Tu,
  • Leah Zhang-Kennedy,
  • Lennart E. Nacke

Journal volume & issue
Vol. 2, no. 2
p. 100095

Abstract

Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers struggled to distinguish between human and AI-augmented writing, but their judgements remained consistent. They noted the loss of a "human touch" and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluation of submissions, regardless of reviewers' personal biases towards GenAI. The quality of the research itself should remain the priority in reviews, irrespective of preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.

Keywords