PeerJ Computer Science (May 2023)

The multi-modal fusion in visual question answering: a review of attention mechanisms

  • Siyu Lu
  • Mingzhe Liu
  • Lirong Yin
  • Zhengtong Yin
  • Xuan Liu
  • Wenfeng Zheng

DOI
https://doi.org/10.7717/peerj-cs.1400
Journal volume & issue
Vol. 9
p. e1400

Abstract

Visual Question Answering (VQA) is a significant cross-disciplinary problem at the intersection of computer vision and natural language processing, in which a computer must produce a natural language answer to a question posed about an image. This requires fusing textual and visual features into a single multimodal representation, and the attention mechanism is the key component for doing so successfully: introducing attention allows text features and image features to be integrated into a compact multimodal representation. It is therefore necessary to clarify the development status of attention mechanisms, survey the most advanced attention methods, and consider their future directions. In this article, we first conduct a bibliometric analysis with CiteSpace, from which we observe, and reasonably speculate, that the attention mechanism has great development potential in cross-modal retrieval. We then discuss the classification and application of existing attention mechanisms in VQA tasks, analyze their shortcomings, and summarize current improvement methods. Finally, in light of this continuing exploration of attention mechanisms, we argue that VQA will evolve in a smarter and more human-like direction.
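To make the fusion step concrete, the sketch below shows one common pattern the abstract alludes to: question-guided attention over image region features, followed by a simple fusion into a compact multimodal vector. The shapes, names, and the elementwise-product fusion are illustrative assumptions for this sketch, not the specific method of any paper surveyed in the review.

```python
# Minimal sketch of question-guided attention for VQA fusion (illustrative only).
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend_and_fuse(regions, question):
    """Fuse image region features with a question feature via attention.

    regions:  (num_regions, d) visual features, e.g. from a CNN or detector
    question: (d,) encoded question feature, e.g. from an RNN or transformer
    returns:  (d,) compact multimodal representation
    """
    # Score each region by its similarity to the question, scaled as in
    # scaled dot-product attention.
    d = question.shape[-1]
    scores = regions @ question / np.sqrt(d)   # (num_regions,)
    weights = softmax(scores)                  # attention distribution over regions
    attended = weights @ regions               # question-guided visual summary, (d,)
    # Elementwise product is one simple fusion choice; bilinear pooling,
    # concatenation + MLP, and co-attention are common alternatives.
    return attended * question

rng = np.random.default_rng(0)
fused = attend_and_fuse(rng.standard_normal((36, 512)),
                        rng.standard_normal(512))
print(fused.shape)  # (512,)
```

In practice the attention weights show which image regions the model consults for a given question, which is also why attention is central to interpretability in VQA.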

Keywords