IET Image Processing (Oct 2024)

Spatial guided image captioning: Guiding attention with object's spatial interaction

  • Runyan Du,
  • Wenkai Zhang,
  • Shuoke Li,
  • Jialiang Chen,
  • Zhi Guo

DOI
https://doi.org/10.1049/ipr2.13124
Journal volume & issue
Vol. 18, no. 12
pp. 3368 – 3380

Abstract

Nowadays, relational position embedding is widely used in many large multi‐modal models. It originates in relational captioning (a branch of image captioning) and comprises two procedures: geometric modelling and prior attention. However, several problems in these conventional procedures remain unsolved. This paper reviews the shortcomings of geometric modelling and prior attention. A new framework called relational guided transformer (RGT) is then proposed to verify the authors' conclusions from the origin of relational position embedding, namely relational captioning. Specifically, RGT introduces two simple but effective improvements to geometric modelling and prior attention: (1) a machine‐learned geometric modelling strategy, multi‐task geometric modelling (MTG), trained under multi‐task learning, replaces the original hand‐crafted geometric features; (2) the effectiveness of several kinds of prior attention is analysed and preserved in an improved form, spatial guided attention (SGA), which integrates geometric prior knowledge. Extensive experiments on MSCOCO and Flickr30k investigate the effectiveness of each module and support the authors' argument. The superiority of the model over the authors' baseline is also demonstrated in offline evaluation on the "Karpathy" test split of both datasets.
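
The abstract does not spell out how the geometric prior enters the attention computation. The sketch below is a minimal, hypothetical PyTorch illustration of the general idea behind guiding attention with object spatial relations: pairwise geometric features between region boxes are embedded by a small MLP and added as a per-head bias to the attention logits before the softmax. The class name, the geo_mlp embedding, and the 4-dimensional geometric feature are assumptions for illustration, not the authors' MTG/SGA implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GeometricBiasedAttention(nn.Module):
        """Hypothetical sketch: multi-head attention over object regions
        whose logits are biased by pairwise geometric (spatial) features."""

        def __init__(self, dim, num_heads=8, geo_dim=4):
            super().__init__()
            assert dim % num_heads == 0
            self.num_heads = num_heads
            self.head_dim = dim // num_heads
            self.qkv = nn.Linear(dim, dim * 3)
            self.proj = nn.Linear(dim, dim)
            # Small MLP mapping pairwise geometric features (e.g. relative
            # box offsets) to one scalar bias per attention head.
            self.geo_mlp = nn.Sequential(
                nn.Linear(geo_dim, dim), nn.ReLU(), nn.Linear(dim, num_heads)
            )

        def forward(self, x, geo):
            # x:   (B, N, dim)        region appearance features
            # geo: (B, N, N, geo_dim) pairwise geometric features
            B, N, _ = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
            k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
            v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

            # Standard scaled dot-product logits, (B, H, N, N).
            logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
            # Geometric prior bias, reshaped to (B, H, N, N) and added
            # before the softmax so spatial layout guides the attention.
            bias = self.geo_mlp(geo).permute(0, 3, 1, 2)
            attn = F.softmax(logits + bias, dim=-1)

            out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
            return self.proj(out)

Under these assumptions, a call such as GeometricBiasedAttention(512)(features, box_offsets) with features of shape (B, N, 512) and box_offsets of shape (B, N, N, 4) returns updated region features; the bias term is the point where a learned geometric prior, rather than a hand-crafted one, would steer the attention weights.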

Keywords