IEEE Access (Jan 2025)

Evaluation of Generative AI Models in Python Code Generation: A Comparative Study

  • Dominik Palla,
  • Antonin Slaby

DOI
https://doi.org/10.1109/access.2025.3560244
Journal volume & issue
Vol. 13
pp. 65334–65347

Abstract

This study evaluates leading generative AI models for Python code generation. Evaluation criteria include syntax accuracy, response time, completeness, reliability, and cost. The models tested comprise OpenAI’s GPT series (GPT-4 Turbo, GPT-4o, GPT-4o Mini, GPT-3.5 Turbo), Google’s Gemini (1.0 Pro, 1.5 Flash, 1.5 Pro), Meta’s LLaMA (3.0 8B, 3.1 8B), and Anthropic’s Claude models (3.5 Sonnet, 3 Opus, 3 Sonnet, 3 Haiku). Ten coding tasks of varying complexity were run for three iterations per model to measure performance and consistency. Claude models, especially Claude 3.5 Sonnet, achieved the highest accuracy and reliability, outperforming all other models on both simple and complex tasks. Gemini models showed limitations in handling complex code. Budget-friendly options such as Claude 3 Haiku and Gemini 1.5 Flash maintained good accuracy on simpler problems at lower cost. Unlike earlier single-metric studies, this work introduces a multi-dimensional evaluation framework that jointly considers accuracy, reliability, cost, and exception handling. Future work will explore other programming languages and include metrics such as code optimization and security robustness.
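To make the multi-dimensional framework concrete, the sketch below shows one way such an evaluation loop could be organized in Python. It is an illustrative assumption, not the authors' published harness: the `Task` structure, the generic `model(prompt)` callable, and the flat `cost_per_call` estimate are all hypothetical stand-ins for the paper's actual tasks, API clients, and pricing. Syntax accuracy is checked with `ast.parse`, completeness with a per-task check function, and response time with wall-clock timing over repeated iterations.

```python
# Illustrative sketch only; names and metrics are assumptions, not the study's code.
import ast
import time
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Task:
    prompt: str                    # natural-language coding task given to the model
    check: Callable[[str], bool]   # returns True if the generated code is complete/correct


def evaluate(model: Callable[[str], str], tasks: List[Task],
             iterations: int = 3, cost_per_call: float = 0.0) -> dict:
    """Run each task several times and aggregate per-model metrics."""
    syntax_ok = complete = 0
    latencies = []
    total = len(tasks) * iterations
    for task in tasks:
        for _ in range(iterations):
            start = time.perf_counter()
            code = model(task.prompt)                      # query the model
            latencies.append(time.perf_counter() - start)  # response time
            try:
                ast.parse(code)                            # syntax accuracy
                syntax_ok += 1
            except SyntaxError:
                continue
            if task.check(code):                           # completeness / reliability
                complete += 1
    return {
        "syntax_accuracy": syntax_ok / total,
        "completeness": complete / total,
        "avg_response_time_s": sum(latencies) / total,
        "estimated_cost": cost_per_call * total,
    }


if __name__ == "__main__":
    # Dummy stand-in for an API-backed model, used only to show the interface.
    def dummy_model(prompt: str) -> str:
        return "def add(a, b):\n    return a + b"

    tasks = [Task("Write add(a, b) returning the sum.",
                  lambda src: "def add" in src)]
    print(evaluate(dummy_model, tasks))
```

Running the same loop over the three iterations per task, as the study does, lets per-model averages double as a consistency measure: a model whose completeness varies across repeated runs is penalized relative to one that succeeds every time.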

Keywords