ICTACT Journal on Image and Video Processing (Feb 2021)

AN END-TO-END TRAINABLE CAPSULE NETWORK FOR IMAGE-BASED CHARACTER RECOGNITION AND ITS APPLICATION TO VIDEO SUBTITLE RECOGNITION

  • Ahmed Tibermacine
  • Selmi Mohamed Amine

DOI
https://doi.org/10.21917/ijivp.2021.0339
Journal volume & issue
Vol. 11, No. 3
pp. 2378–2384

Abstract


The text presented in videos contains important information for a wide range of vision-based applications. The key modules for extracting this information are text detection followed by text recognition, which are the subject of our study. In this paper, we propose an innovative end-to-end subtitle detection and recognition system for videos. Our system consists of three modules. Video subtitles are first detected by a novel image operator based on our blob extraction method. The detected subtitle is then segmented into single characters by a simple technique on the binary image and passed to the recognition module. Lastly, a capsule neural network (CapsNet) trained on the Chars74K dataset is adopted for recognizing the characters. The proposed detection method is robust and performs well on video subtitle detection, as evaluated on a dataset we constructed. In addition, CapsNet shows its validity and effectiveness for the recognition of video subtitles. To the best of our knowledge, this is the first work in which capsule networks have been empirically investigated for character recognition of video subtitles.
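The abstract mentions segmenting the detected subtitle into single characters with "a simple technique on the binary image" but does not specify it. A common such technique is vertical-projection segmentation, sketched below; this is an illustrative assumption, not the paper's documented method, and the function name and parameters are hypothetical.

```python
import numpy as np

def segment_characters(binary, min_width=2):
    """Split a binarized subtitle line (text = 1, background = 0) into
    per-character crops using gaps in the vertical ink projection.

    Illustrative sketch only: the paper's exact segmentation rule is
    not given in the abstract; column-projection splitting is assumed.
    """
    col_ink = binary.sum(axis=0)          # ink count per column
    chars, start = [], None
    for x, ink in enumerate(col_ink):
        if ink > 0 and start is None:
            start = x                     # a character run begins
        elif ink == 0 and start is not None:
            if x - start >= min_width:    # drop spurious thin runs
                chars.append(binary[:, start:x])
            start = None
    # flush a run that reaches the right edge of the line
    if start is not None and binary.shape[1] - start >= min_width:
        chars.append(binary[:, start:])
    return chars
```

Each returned crop can then be resized to the classifier's input shape and fed to the CapsNet recognizer.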

Keywords