Sensors (Sep 2022)

A Review of Multi-Modal Learning from the Text-Guided Visual Processing Viewpoint

  • Ubaid Ullah,
  • Jeong-Sik Lee,
  • Chang-Hyeon An,
  • Hyeonjin Lee,
  • Su-Yeong Park,
  • Rock-Hyun Baek,
  • Hyun-Chul Choi

DOI
https://doi.org/10.3390/s22186816
Journal volume & issue
Vol. 22, no. 18
p. 6816

Abstract


For decades, correlating different data domains to exploit the full potential of machines has driven research, especially in neural networks. Text and visual data (images and videos) are two such distinct domains, each with an extensive research history. Recently, using natural language to process 2D or 3D images and videos with the power of neural networks has shown great promise. Despite the diverse range of remarkable work in this field, notably in the past few years, rapid improvement has still left open challenges for researchers. Moreover, the connection between these two domains has been built mainly on GANs, which limits the horizons of the field. This review treats Text-to-Image (T2I) synthesis as part of a broader picture, Text-guided Visual output (T2Vo), with the primary goal of highlighting the gaps by proposing a more comprehensive taxonomy. We broadly categorize text-guided visual output into three main divisions and meaningful subdivisions by critically examining an extensive body of literature from top-tier computer vision venues and closely related fields, such as machine learning and human–computer interaction, focusing on state-of-the-art models and a comparative analysis. This study follows up on previous surveys of T2I, adding value by evaluating the diverse range of existing methods, covering different generative models and several types of visual output, critically examining the various approaches, and highlighting their shortcomings to suggest future directions of research.
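
For readers unfamiliar with the GAN-based T2I pipeline the abstract refers to, the sketch below shows the typical shape of a text-conditioned generator: a sentence embedding of the caption is concatenated with a noise vector and upsampled to an image. This is a minimal illustrative example, not a model from the paper; the class name, embedding size, and layer widths are assumptions.

# Minimal sketch (illustrative, not from the paper): a text-conditioned GAN
# generator of the kind commonly used for T2I synthesis. A precomputed
# sentence embedding is assumed; all dimensions are placeholders.
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    def __init__(self, text_dim=256, noise_dim=100, img_channels=3):
        super().__init__()
        # Treat the fused [noise, text] vector as a 1x1 feature map and
        # upsample it to a 64x64 image with transposed convolutions.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + text_dim, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, img_channels, 4, 2, 1, bias=False),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise, text_embedding):
        # Fuse the caption embedding with random noise, add spatial dims.
        z = torch.cat([noise, text_embedding], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(z)

# Usage: one 64x64 image conditioned on a hypothetical caption embedding.
gen = TextConditionedGenerator()
fake = gen(torch.randn(1, 100), torch.randn(1, 256))
print(fake.shape)  # torch.Size([1, 3, 64, 64])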

Keywords