PLoS Computational Biology (Sep 2023)

Evaluating a large language model's ability to solve programming exercises from an introductory bioinformatics course.

  • Stephen R Piccolo
  • Paul Denny
  • Andrew Luxton-Reilly
  • Samuel H Payne
  • Perry G Ridge

DOI: https://doi.org/10.1371/journal.pcbi.1011511
Journal volume & issue: Vol. 19, no. 9, p. e1011511

Abstract

Computer programming is a fundamental tool for life scientists, allowing them to carry out essential research tasks. However, despite various educational efforts, learning to write code can be a challenging endeavor for students and researchers in life-sciences disciplines. Recent advances in artificial intelligence have made it possible to translate human-language prompts to functional code, raising questions about whether these technologies can aid (or replace) life scientists' efforts to write code. Using 184 programming exercises from an introductory-bioinformatics course, we evaluated the extent to which one such tool, OpenAI's ChatGPT, could successfully complete programming tasks. ChatGPT solved 139 (75.5%) of the exercises on its first attempt. For the remaining exercises, we provided natural-language feedback to the model, prompting it to try different approaches. Within 7 or fewer attempts, ChatGPT solved 179 (97.3%) of the exercises. These findings have implications for life-sciences education and research. Instructors may need to adapt their pedagogical approaches and assessment techniques to account for these new capabilities that are available to the general public. For some programming tasks, researchers may be able to work in collaboration with machine-learning models to produce functional code.
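
The abstract describes an iterative protocol: the model is given an exercise, its candidate solution is tested, and natural-language feedback is returned for up to seven attempts. The sketch below illustrates what such a loop could look like using the OpenAI Python client; the model name and the run_tests() helper are illustrative assumptions, not the authors' actual evaluation pipeline.

```python
# Minimal sketch of an attempt-and-feedback loop like the one the abstract
# describes. run_tests() is a hypothetical helper standing in for the
# course's automated exercise tests; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MAX_ATTEMPTS = 7   # the study allowed up to 7 attempts per exercise

def run_tests(code: str) -> tuple[bool, str]:
    """Hypothetical: run the candidate code against the exercise's
    automated tests and return (passed, natural-language feedback)."""
    raise NotImplementedError

def solve_exercise(exercise_prompt: str, model: str = "gpt-3.5-turbo") -> bool:
    messages = [{"role": "user", "content": exercise_prompt}]
    for attempt in range(1, MAX_ATTEMPTS + 1):
        reply = client.chat.completions.create(model=model, messages=messages)
        code = reply.choices[0].message.content
        passed, feedback = run_tests(code)
        if passed:
            print(f"Solved on attempt {attempt}")
            return True
        # Extend the conversation so the model sees its own output
        # and the feedback, then tries a different approach.
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user", "content": feedback})
    return False
```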