EURASIP Journal on Image and Video Processing (Aug 2024)

A method for image–text matching based on semantic filtering and adaptive adjustment

  • Ran Jin,
  • Tengda Hou,
  • Tao Jin,
  • Jie Yuan,
  • Chenjie Du

DOI
https://doi.org/10.1186/s13640-024-00639-y
Journal volume & issue
Vol. 2024, no. 1
pp. 1 – 17

Abstract

As image–text matching, a critical task in the field of computer vision, links cross-modal data, it has attracted extensive attention. Most existing image–text matching methods explore local similarity between image regions and sentence words to align images with texts. Although this fine-grained approach yields remarkable gains, how to further mine the deep semantics between data pairs and focus on the essential semantics in the data remains an open question. In this work, a new semantic filtering and adaptive adjustment approach (FAAR) is proposed to ease this problem. Specifically, the filtered attention (FA) module selectively focuses on typical alignments and eliminates interference from meaningless comparisons. The adaptive regulator (AR) then further adjusts the attention weights of key segments for the filtered regions and words. The superiority of the proposed method was validated by a number of qualitative experiments and analyses on the Flickr30K and MSCOCO data sets.
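The abstract describes filtering weak region–word alignments before aggregating cross-modal attention. The sketch below is a minimal, hypothetical illustration of that general idea (not the authors' actual FA module): it computes a region-by-word attention matrix, zeroes out weights below a threshold `tau`, and renormalizes, so that negligible comparisons no longer dilute the alignment.

```python
import numpy as np

def filtered_attention(regions, words, tau=0.05):
    """Illustrative thresholded cross-modal attention (assumed, simplified form).

    regions: (n_regions, d) image-region features
    words:   (n_words, d) word features
    tau:     filtering threshold on attention weights
    """
    # Cosine similarity between every image region and every word.
    r = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    w = words / np.linalg.norm(words, axis=1, keepdims=True)
    sim = r @ w.T                                   # (n_regions, n_words)

    # Row-wise softmax turns similarities into attention weights.
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)

    # Filtering step: drop weak alignments below tau, then renormalize
    # so each region's remaining weights still sum to 1.
    attn = np.where(attn >= tau, attn, 0.0)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn
```

An adaptive-regulator step, as described, would then rescale the surviving weights of key segments; a simple stand-in would be applying a learned temperature to `sim` before the softmax.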

Keywords