Discover Education (Oct 2024)
AI versus human effectiveness in essay evaluation
Abstract
The evaluation of student essay corrections has become a focal point in understanding the evolving role of Artificial Intelligence (AI) in education. This study assesses the accuracy, efficiency, and cost-effectiveness of ChatGPT's essay corrections compared with those of human evaluators, focusing on the identification and rectification of errors in grammar, spelling, sentence structure, punctuation, coherence, relevance, essay structure, and clarity. Essays were collected from 100 randomly selected university students, covering diverse themes, with anonymity maintained and no prior correction by humans or AI. An analysis sheet outlining the linguistic and informational elements under evaluation serves as the benchmark for assessing the quality of corrections made by ChatGPT and by human evaluators. The study reveals that ChatGPT excels in fundamental language mechanics, demonstrating superior performance in grammar, spelling, sentence structure, relevance, and supporting evidence, whereas human evaluators remain stronger in maintaining thematic consistency. The findings support a balanced approach that leverages the complementary strengths of humans and AI for a comprehensive and effective essay correction process.
Keywords