Science Editing (Aug 2024)

Research ethics and issues regarding the use of ChatGPT-like artificial intelligence platforms by authors and reviewers: a narrative review

  • Sang-Jun Kim

DOI
https://doi.org/10.6087/kcse.343
Journal volume & issue
Vol. 11, no. 2
pp. 96–106

Abstract

While generative artificial intelligence (AI) technology has become increasingly competitive since OpenAI introduced ChatGPT, its widespread use poses significant ethical challenges in research. Excessive reliance on tools like ChatGPT may intensify ethical concerns in scholarly articles. Therefore, this article aims to provide a comprehensive narrative review of the ethical issues associated with using AI in academic writing and to inform researchers of current trends. Our methodology involved a detailed examination of the literature on ChatGPT and related research trends. We searched major databases to identify additional relevant articles and cited literature, and collected and analyzed the resulting papers. The major issues identified in the literature fall into two categories: problems faced by authors who use nonacademic AI platforms in their writing, and challenges that reviewers and editors face in detecting and accepting AI-generated content. We explored eight specific ethical problems highlighted by authors and reviewers and conducted a thorough review of five key topics in research ethics. Given that nonacademic AI platforms like ChatGPT often do not disclose their training data sources, there is a substantial risk of unattributed content and plagiarism. Therefore, researchers must verify the accuracy and authenticity of AI-generated content before incorporating it into their articles, ensuring adherence to principles of research integrity and ethics, including avoidance of fabrication, falsification, and plagiarism.

Keywords