IEEE Access (Jan 2023)
Automatic Arabic Grading System for Short Answer Questions
Abstract
The era of technology and digitalization has benefited the educational sector, and the examination system is one of the most important educational pillars to be affected. Automatic exam grading marks a turning point in the development of examinations, and automatic grading systems have begun to replace traditional assessment. An automatic grading system assigns grades to students' answers by comparing them with model answers and generates results from these comparisons. In this paper, we focus on short answer questions. Most existing research targets the English language, whereas few studies have addressed Arabic; moreover, Arabic is considered a low-resource language. This paper aims to build an Automatic Arabic Short Answer Grading (AASAG) model based on semantic similarity approaches, which measures the semantic similarity between each student answer and the corresponding model answer. The proposed model is applied to AR-ASAG, one of the few publicly available Arabic datasets for this task. It contains 2133 pairs of model and student answers, distributed in several formats such as txt, xml, and db. The effectiveness of the proposed model was evaluated through two experiments using two weighting schemes: a local weighting scheme and a hybrid local-global weighting scheme. The hybrid local-global weight-based LSA outperformed the local weight-based LSA, achieving an F1-score of 82.82% and a root-mean-square error (RMSE) of 0.798.
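To make the pipeline described above concrete, the following is a minimal sketch of LSA-based answer grading with term weighting. It is an illustration only, not the authors' implementation: the pairing of a sublinear term frequency (local weight) with IDF (global weight), the use of scikit-learn's TruncatedSVD for LSA, the function name grade_answers, and the linear mapping from cosine similarity to a grade are all assumptions; the paper's actual Arabic preprocessing (normalization, stemming) is not shown.

```python
# Minimal sketch (assumed, not the authors' code): LSA-based semantic similarity
# between model and student answers, with a local or hybrid local+global
# term-weighting scheme, followed by a simple similarity-to-grade mapping.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity


def grade_answers(model_answers, student_answers, max_grade=5.0,
                  n_topics=100, hybrid=True):
    """Return an estimated grade for each (model answer, student answer) pair."""
    corpus = list(model_answers) + list(student_answers)

    if hybrid:
        # Hybrid local + global weighting: sublinear TF (local) * IDF (global).
        vectorizer = TfidfVectorizer(sublinear_tf=True)
    else:
        # Local weighting only: raw term frequency.
        vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(corpus)

    # LSA: project the weighted term-document matrix into a latent semantic space.
    n_components = max(1, min(n_topics, min(X.shape) - 1))
    lsa = TruncatedSVD(n_components=n_components, random_state=0)
    X_lsa = lsa.fit_transform(X)

    n = len(model_answers)
    model_vecs, student_vecs = X_lsa[:n], X_lsa[n:]

    # Cosine similarity between each student answer and its model answer,
    # scaled linearly to the grading range (an assumed mapping).
    sims = np.array([
        cosine_similarity(m.reshape(1, -1), s.reshape(1, -1))[0, 0]
        for m, s in zip(model_vecs, student_vecs)
    ])
    return np.clip(sims, 0.0, 1.0) * max_grade
```

In the abstract's terms, hybrid=True corresponds to the hybrid local-global weighting scheme and hybrid=False to the local-only scheme; the returned grades can then be compared with instructor grades to compute F1-score and RMSE.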
Keywords