Journal of Hematology & Oncology (May 2024)

ChatGPT’s ability to generate realistic experimental images poses a new challenge to academic integrity

  • Lingxuan Zhu,
  • Yancheng Lai,
  • Weiming Mou,
  • Haoran Zhang,
  • Anqi Lin,
  • Chang Qi,
  • Tao Yang,
  • Liling Xu,
  • Jian Zhang,
  • Peng Luo

DOI
https://doi.org/10.1186/s13045-024-01543-8
Journal volume & issue
Vol. 17, no. 1
pp. 1 – 3

Abstract

The rapid advancement of large language models (LLMs) such as ChatGPT has raised concerns about their potential impact on academic integrity. While initial concerns focused on ChatGPT’s writing capabilities, recent updates have integrated DALL-E 3’s image-generation features, extending the risks to visual evidence in biomedical research. Our tests revealed that ChatGPT’s nearly barrier-free image generation can be used to produce images of experimental results, such as blood smears, Western blots, and immunofluorescence staining. Although ChatGPT’s current ability to generate experimental images is limited, the risk of misuse is evident. This development underscores the need for immediate action. We suggest that AI providers restrict the generation of experimental images, develop tools to detect AI-generated images, and consider adding “invisible watermarks” to generated images. By implementing these measures, we can better ensure the responsible use of AI technology in academic research and maintain the integrity of scientific evidence.