Journal of Information Systems and Informatics (Sep 2024)

An Experimental Study of the Efficacy of Prompting Strategies in Guiding ChatGPT for a Computer Programming Task

  • Nompilo Makhosi Mnguni,
  • Nkululeko Nkomo,
  • Kudakwashe Maguraushe,
  • Murimo Bethel Mutanga

DOI: https://doi.org/10.51519/journalisi.v6i3.783
Journal volume & issue: Vol. 6, No. 3, pp. 1346–1359

Abstract


In the rapidly advancing era of artificial intelligence (AI), optimising language models such as ChatGPT (Chat Generative Pre-trained Transformer) for specialised tasks like computer programming remains poorly understood, and the quality and correctness of the code ChatGPT generates are often inconsistent. This study analyses how two prompting strategies, text-to-code and code-to-code, affect the output of ChatGPT's responses on programming tasks. The study adopted an experimental design that presented ChatGPT with a diverse set of programming tasks and prompts spanning various programming languages, difficulty levels, and problem domains. The generated outputs were rigorously tested and evaluated for accuracy, latency, and qualitative aspects. The findings indicated that code-to-code prompting significantly improved accuracy, achieving a 93.55% success rate compared to 29.03% for text-to-code. Code-to-code prompts were particularly effective across all difficulty levels, whereas text-to-code struggled, especially with harder tasks. Based on these findings, computer programming students should appreciate that careful prompting is essential for obtaining the desired output from ChatGPT. By using optimised prompting methods, students can achieve more accurate and efficient code generation, enhancing the quality of their code. Future research should explore the balance between prompt specificity and code efficiency, investigate additional prompting strategies, and develop best practices for prompt design to optimise the use of AI in software development.
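For readers unfamiliar with the terminology, the two strategies can be sketched as follows. This is a hypothetical illustration only; the task, wording, and code below are invented for exposition and are not drawn from the paper's experimental materials.

```python
# Hypothetical examples of the two prompting strategies compared in the study.

# Text-to-code: the prompt is a natural-language task description only,
# and the model must produce code from scratch.
text_to_code_prompt = (
    "Write a Python function fib(n) that returns the n-th Fibonacci number."
)

# Code-to-code: the prompt supplies existing code (e.g. a buggy draft or a
# skeleton) and asks the model to repair, complete, or transform it.
code_to_code_prompt = """Fix the bug in this function so that fib(0) == 0:

def fib(n):
    if n < 2:
        return 1  # bug: fib(0) should return 0
    return fib(n - 1) + fib(n - 2)
"""

# Both strategies target the same task; they differ only in whether the
# prompt grounds the model with concrete code.
print("fib" in text_to_code_prompt, "fib" in code_to_code_prompt)
```

The study's finding is that grounding the prompt in concrete code (code-to-code) yielded markedly higher accuracy than describing the task in prose alone.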
