Mathematics (Jul 2024)

Bilingual–Visual Consistency for Multimodal Neural Machine Translation

  • Yongwen Liu,
  • Dongqing Liu,
  • Shaolin Zhu

DOI
https://doi.org/10.3390/math12152361
Journal volume & issue
Vol. 12, no. 15
p. 2361

Abstract

Current multimodal neural machine translation (MNMT) approaches primarily focus on ensuring consistency between visual annotations and the source language, often overlooking broader multimodal coherence, including target–visual and bilingual–visual alignment. In this paper, we propose a novel approach that effectively exploits target–visual consistency (TVC) and bilingual–visual consistency (BiVC) to improve MNMT performance. Our method uses visual annotations that depict concepts shared across bilingual parallel sentences to enhance multimodal coherence in translation. We exploit target–visual consistency by extracting contextual cues from visual annotations during auto-regressive decoding, incorporating vital future context to improve the target sentence representation. Additionally, we introduce a consistency loss that promotes semantic congruence between bilingual sentence pairs and their visual annotations, fostering a tighter integration of the textual and visual modalities. Extensive experiments on diverse multimodal translation datasets empirically demonstrate the effectiveness of our approach. This visually aware, data-driven framework also opens opportunities for intelligent learning, adaptive control, and robust distributed optimization of multi-agent systems in uncertain, complex environments. By fusing multimodal data and machine learning, our method points toward control paradigms capable of handling the dynamics and constraints of real-world multi-agent applications.
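
To make the bilingual–visual consistency idea concrete, the following is a minimal PyTorch-style sketch of one way such a consistency loss could be expressed. It assumes pooled sentence embeddings for the source and target sentences and a pooled embedding for the visual annotation, all projected to the same dimensionality; the function name, the cosine-based form, and the equal weighting of the terms are illustrative assumptions, not the authors' exact formulation (which is not given in this abstract).

```python
import torch
import torch.nn.functional as F


def bivc_loss(src_emb: torch.Tensor,
              tgt_emb: torch.Tensor,
              img_emb: torch.Tensor) -> torch.Tensor:
    """Illustrative bilingual-visual consistency loss (assumed form).

    Args:
        src_emb: (batch, dim) pooled source-sentence embeddings.
        tgt_emb: (batch, dim) pooled target-sentence embeddings.
        img_emb: (batch, dim) pooled visual-annotation embeddings.
    """
    # L2-normalize so dot products become cosine similarities.
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)

    # Cosine similarity of each sentence with its visual annotation.
    src_img = (src * img).sum(dim=-1)  # (batch,)
    tgt_img = (tgt * img).sum(dim=-1)  # (batch,)

    # Pull both sentences toward the image, and keep the two
    # sentences' agreement with the image close to each other
    # (a simple proxy for bilingual-visual coherence).
    loss = (1.0 - src_img).mean() \
         + (1.0 - tgt_img).mean() \
         + (src_img - tgt_img).abs().mean()
    return loss
```

In practice such a term would be added, with a tunable weight, to the standard cross-entropy translation loss; the weighting scheme here is likewise an assumption for illustration.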

Keywords