IEEE Access (Jan 2025)

Japanese Short Answer Grading for Japanese Language Learners Using the Contextual Representation of BERT

  • Dyah Lalita Luhurkinanti
  • Prima Dewi Purnamasari
  • Takashi Tsunakawa
  • Anak Agung Putri Ratna

DOI
https://doi.org/10.1109/ACCESS.2025.3532659
Journal volume & issue
Vol. 13
pp. 17195–17207

Abstract


The automation of short-answer grading in examinations aims to help teachers grade more efficiently and fairly. The Japanese SIMPLE-O system attempts to grade Japanese language learners’ short answers using a dataset from a real examination. Bidirectional encoder representations from transformers (BERT), which has shown strong performance on natural language processing (NLP) tasks, is used to grade answers without fine-tuning because of the small amount of available data. Two experiments are conducted in this study: the first grades answers based on similarity, while the second classifies answers as either correct or incorrect. Five BERT models are tested in the system, and two additional sentence BERT (SBERT) and RoBERTa models are tested on the similarity task. The best Pearson’s correlation for similarity-based grading is obtained with Tohoku BERT Base. Hiragana-kanji conversion improves the correlation to 0.615 for BERT and 0.593 for SBERT but yields little improvement for RoBERTa. In the binary classification experiments, all models achieve an accuracy above 90%, with Tohoku BERT Large performing best. Even without fine-tuning, BERT can serve as an embedding method for binary classification with high accuracy.
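As a rough illustration of the similarity-based approach described above, the sketch below embeds a reference answer and a student answer with a frozen pretrained Japanese BERT and scores them with cosine similarity. The checkpoint name, the mean-pooling step, and the example sentences are assumptions for illustration only; they are not taken from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint; the paper's exact Tohoku BERT variant may differ.
# Note: this tokenizer typically requires the `fugashi` and `ipadic` packages.
MODEL_NAME = "cl-tohoku/bert-base-japanese"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()  # frozen, no fine-tuning

def embed(text: str) -> torch.Tensor:
    """Sentence embedding via mean pooling over the last hidden states."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    mask = inputs["attention_mask"].unsqueeze(-1)            # (1, seq_len, 1)
    summed = (outputs.last_hidden_state * mask).sum(dim=1)   # ignore padding
    return summed / mask.sum(dim=1)

# Hypothetical reference and student answers.
reference = embed("東京は日本の首都です。")
student = embed("日本の首都は東京です。")
score = torch.cosine_similarity(reference, student).item()
print(f"cosine similarity: {score:.3f}")
```

For the binary (correct/incorrect) experiment, the same frozen embeddings could be fed to a lightweight classifier; the abstract does not specify the classifier head, so that detail is omitted here.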

Keywords