Scientific Reports (Aug 2025)
Integrating CNN and transformer architectures for superior Arabic printed and handwriting characters classification
Abstract
Optical Character Recognition (OCR) systems play a crucial role in converting printed Arabic text into digital formats, enabling applications such as education and digital archiving. However, the complex characteristics of the Arabic script, including its cursive nature, diacritical marks, handwriting variability, and ligatures, present significant challenges for accurate character recognition. This study proposes a hybrid transformer encoder-based model for Arabic printed and handwritten character classification. The methodology integrates transfer learning, using pre-trained VGG16 and ResNet50 models for feature extraction, followed by a feature ensemble process. The transformer encoder architecture leverages its self-attention mechanism and multilayer perceptron (MLP) components to capture global dependencies and refine feature representations. Training and evaluation were conducted on the Arabic OCR and Arabic Handwritten Character Recognition (AHCR) datasets, achieving accuracies of 99.51% and 98.19%, respectively. The proposed model is further evaluated in an extended ablation study that trains on the Arabic Char 4k OCR dataset and tests on a portion of the AHCR dataset to assess performance on unseen data. The proposed model significantly outperforms individual CNN-based models and ensemble techniques, demonstrating its robustness and efficiency for Arabic character classification. This research establishes a foundation for improved OCR systems, offering a reliable solution for real-world Arabic text recognition tasks.
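To make the described pipeline concrete, the following is a minimal sketch of the hybrid design outlined in the abstract: frozen pre-trained VGG16 and ResNet50 backbones as feature extractors, a feature ensemble of the two CNN outputs, and a transformer encoder (self-attention + MLP) that fuses them before classification. The framework (PyTorch/torchvision), embedding dimension, number of encoder layers, and token layout are assumptions for illustration only; the paper's exact configuration is not specified here.

```python
# Hypothetical sketch of the hybrid CNN + transformer-encoder classifier.
# Layer sizes, pooling, and fusion details are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models


class HybridArabicCharClassifier(nn.Module):
    def __init__(self, num_classes: int, d_model: int = 256,
                 n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Pre-trained CNN backbones used as frozen feature extractors (transfer learning).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))  # -> 512-d
        self.resnet_features = nn.Sequential(*list(resnet.children())[:-1])       # -> 2048-d
        for p in list(self.vgg_features.parameters()) + list(self.resnet_features.parameters()):
            p.requires_grad = False
        # Project each backbone's pooled features to a shared token dimension,
        # then fuse the two tokens with a transformer encoder (self-attention + MLP).
        self.vgg_proj = nn.Linear(512, d_model)
        self.resnet_proj = nn.Linear(2048, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.vgg_proj(self.vgg_features(x).flatten(1))        # (B, d_model)
        r = self.resnet_proj(self.resnet_features(x).flatten(1))  # (B, d_model)
        tokens = torch.stack([v, r], dim=1)                       # (B, 2, d_model) feature ensemble
        fused = self.encoder(tokens).mean(dim=1)                  # self-attention fusion
        return self.head(fused)


if __name__ == "__main__":
    # Example: classify a batch of 224x224 RGB character images into 28 letter classes.
    model = HybridArabicCharClassifier(num_classes=28)
    logits = model(torch.randn(4, 3, 224, 224))
    print(logits.shape)  # torch.Size([4, 28])
```

In this reading, the "feature ensemble" is realized by stacking the two projected CNN feature vectors as tokens, so the encoder's self-attention can model dependencies between the VGG16 and ResNet50 representations before the classification head.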
Keywords