IET Image Processing (May 2024)

Cross‐modal knowledge learning with scene text for fine‐grained image classification

  • Li Xiong,
  • Yingchi Mao,
  • Zicheng Wang,
  • Bingbing Nie,
  • Chang Li

DOI
https://doi.org/10.1049/ipr2.13039
Journal volume & issue
Vol. 18, no. 6
pp. 1447 – 1459

Abstract


Scene text in natural images carries additional semantic information that can aid image classification. Existing methods lack a deep understanding of the text and of the visual‐text relationship, which makes it difficult to judge the semantic accuracy of recognized text and its relevance to the visual content. This paper proposes an image classification method based on Cross‐modal Knowledge Learning of Scene Text (CKLST). CKLST consists of three stages: cross‐modal scene text recognition, text semantic enhancement, and visual‐text feature alignment. In the first stage, multi‐attention is used to extract features layer by layer, and a self‐mask‐based iterative correction strategy is utilized to improve scene text recognition accuracy. In the second stage, knowledge features are extracted from external knowledge and fused with text features to enhance the text's semantic information. In the third stage, CKLST realizes visual‐text feature alignment through cross‐attention mechanisms with a similarity matrix, so that the correlation between images and text can be captured to improve accuracy on image classification tasks. On the Con‐Text, Crowd Activity, Drink Bottle, and Synth Text datasets, CKLST significantly outperforms other baselines on fine‐grained image classification, with mAP improvements of 3.54%, 5.37%, 3.28%, and 2.81% over the best baseline, respectively.
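The third stage described above aligns visual and text features via a similarity matrix and cross‐attention. The paper's exact architecture is not given here, but a minimal sketch of the general idea, assuming L2‐normalized patch and token embeddings of a shared dimension and a plain softmax cross‐attention (all names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def align_visual_text(visual, text):
    """Toy visual-text alignment via a cosine similarity matrix.

    visual: (n_patches, d) visual features; text: (n_tokens, d) text features.
    Returns the similarity matrix and text context attended per patch.
    """
    # L2-normalize so dot products become cosine similarities
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    sim = v @ t.T  # similarity matrix, shape (n_patches, n_tokens)

    # Softmax over text tokens: cross-attention weights for each patch
    w = np.exp(sim - sim.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)

    attended = w @ t  # text context aggregated per visual patch
    return sim, attended

# Toy example: orthogonal unit features, so sim is near-diagonal
patches = np.eye(4, 6)  # 4 hypothetical visual patches, dim 6
tokens = np.eye(3, 6)   # 3 hypothetical text tokens, dim 6
sim, attended = align_visual_text(patches, tokens)
```

In a trained model the attended text context would typically be fused with the visual features before the classification head; here the example only shows how the similarity matrix mediates the correlation between the two modalities.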

Keywords