Methods in Ecology and Evolution (Oct 2024)
Harnessing large language models for coding, teaching and inclusion to empower research in ecology and evolution
Abstract
Large language models (LLMs) are a type of artificial intelligence (AI) that can perform various natural language processing tasks. The adoption of LLMs has become increasingly prominent in scientific writing and analyses because of the availability of free applications such as ChatGPT. This increased use of LLMs not only raises concerns about academic integrity but also presents opportunities for the research community. Here we focus on the opportunities for using LLMs for coding in ecology and evolution. We discuss how LLMs can be used to generate, explain, comment, translate, debug, optimise and test code. We also highlight the importance of writing effective prompts and carefully evaluating the outputs of LLMs. In addition, we draft a possible road map for using such models inclusively and with integrity. LLMs can accelerate the coding process, especially for unfamiliar tasks, and free up time for higher-level tasks and creative thinking while increasing efficiency and creative output. LLMs also enhance inclusion by accommodating individuals without coding skills, with limited access to coding education, or for whom English is not their primary written or spoken language. However, code generated by LLMs is of variable quality and has issues related to mathematics, logic, non-reproducibility and intellectual property; it can also include mistakes and approximations, especially in novel methods. We highlight the benefits of using LLMs to teach and learn coding, and advocate for guiding students in the appropriate use of AI tools for coding. Despite the ability to assign many coding tasks to LLMs, we also reaffirm the continued importance of teaching coding skills for interpreting LLM-generated code and developing critical thinking. As editors of MEE, we support, to a limited extent, the transparent, accountable and acknowledged use of LLMs and other AI tools in publications. If LLMs or comparable AI tools (excluding commonly used aids like spell-checkers, Grammarly and Writefull) are used to produce the work described in a manuscript, there must be a clear statement to that effect in its Methods section, and the corresponding or senior author must take responsibility for any code (or text) generated by the AI platform.
Keywords