Alexandria Engineering Journal (Dec 2024)
Design of automated model for inspecting and evaluating handwritten answer scripts: A pedagogical approach with NLP and deep learning
Abstract
We address common challenges examiners face, such as accidental question skipping, marking omissions, and potential bias in assessment. These issues often arise because the high volume of examination materials forces scripts to be examined across separate sessions. In response, we propose a self-regulating examiner that harnesses contemporary technology to reduce examiner workload and mitigate the possibility of errors. This automated approach aims to ensure fairness and accuracy in evaluating response scripts, offering a promising solution to the challenges encountered by examiners in the field. Our study introduces an innovative approach that seamlessly integrates several technologies: Optical Character Recognition (OCR) for text extraction, Natural Language Processing (NLP) for keyword analysis, and machine learning for grading. The results of our method are presented through a user-friendly web application, providing a streamlined and understandable means for examiners to evaluate response scripts.
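To make the pipeline concrete, the following is a minimal sketch of the flow the abstract describes (OCR text extraction, then keyword analysis, then mapping a score onto marks). The function names and the keyword-coverage scoring rule are illustrative assumptions, not the paper's actual implementation; the OCR stage is stubbed out, whereas in practice an engine such as Tesseract would supply the extracted text.

```python
# Hypothetical sketch of the grading pipeline: OCR -> keyword analysis -> marks.
# The OCR stage is a placeholder; real systems would call an OCR engine
# (e.g. pytesseract.image_to_string) on the scanned answer script.

def extract_text(script_image_text):
    # Placeholder for OCR: assume text has already been extracted upstream.
    return script_image_text

def keyword_score(answer_text, keywords):
    """Fraction of expected keywords found in the student's answer."""
    answer_words = set(answer_text.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in answer_words)
    return hits / len(keywords) if keywords else 0.0

def grade(answer_text, keywords, max_marks):
    """Map keyword coverage onto the marks allotted to the question."""
    return round(keyword_score(answer_text, keywords) * max_marks, 1)

answer = extract_text(
    "Photosynthesis converts sunlight water and carbon dioxide into glucose"
)
expected = ["photosynthesis", "sunlight", "glucose", "chlorophyll"]
print(grade(answer, expected, 10))  # 3 of 4 keywords present -> 7.5
```

A real system would replace the bag-of-words match with NLP techniques (stemming, synonym handling, semantic similarity) and learn the score-to-marks mapping from graded examples, but the overall data flow is the same.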