PeerJ Computer Science (Apr 2024)

Adapting multilingual vision language transformers for low-resource Urdu optical character recognition (OCR)

  • Musa Dildar Ahmed Cheema,
  • Mohammad Daniyal Shaiq,
  • Farhaan Mirza,
  • Ali Kamal,
  • M. Asif Naeem

DOI
https://doi.org/10.7717/peerj-cs.1964
Journal volume & issue
Vol. 10
p. e1964

Abstract


In the realm of digitizing written content, the challenges posed by low-resource languages are noteworthy. These languages, often lacking comprehensive linguistic resources, require specialized attention to develop robust systems for accurate optical character recognition (OCR). This article addresses the significance of focusing on such languages and introduces ViLanOCR, an innovative bilingual OCR system tailored for Urdu and English. Unlike existing systems, which struggle with the intricacies of low-resource languages, ViLanOCR leverages advanced multilingual transformer-based language models to achieve superior performance. The proposed approach is evaluated using the character error rate (CER) metric and achieves state-of-the-art results on the Urdu UHWR dataset, with a CER of 1.1%. The experimental results demonstrate the effectiveness of the proposed approach, surpassing state-of-the-art baselines in Urdu handwriting digitization.
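For context on the CER figure quoted above, the metric is conventionally defined as the character-level edit distance between the model's transcription and the reference text, divided by the number of characters in the reference. The sketch below is illustrative only (it is not the authors' evaluation code); the function names and example strings are assumptions made here for demonstration.

```python
# Illustrative sketch of character error rate (CER), the metric used to
# evaluate ViLanOCR. Not taken from the paper's code; provided only to make
# the reported 1.1% figure concrete.

def levenshtein(ref: str, hyp: str) -> int:
    """Character-level edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """CER = edit distance / number of reference characters."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# Example: one extra character in an 11-character reference gives CER ~= 0.091 (9.1%).
print(cer("handwriting", "handwritting"))
```

A CER of 1.1% therefore corresponds to roughly one character-level error per hundred characters of reference text.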

Keywords