Applied Sciences (Jun 2018)

Image Captioning with Word Gate and Adaptive Self-Critical Learning

  • Xinxin Zhu,
  • Lixiang Li,
  • Jing Liu,
  • Longteng Guo,
  • Zhiwei Fang,
  • Haipeng Peng,
  • Xinxin Niu

DOI
https://doi.org/10.3390/app8060909
Journal volume & issue
Vol. 8, no. 6
p. 909

Abstract


Although policy-gradient methods for reinforcement learning have yielded significant improvements in image captioning, achieving high performance during reinforcement optimization remains challenging. There are at least two difficulties: (1) The large vocabulary size leads to a large action space, which makes it difficult for the model to accurately predict the current word. (2) The large variance of gradient estimation in reinforcement learning usually causes severe instability in the training process. In this paper, we propose two innovations to boost the performance of self-critical sequence training (SCST). First, we modify the standard long short-term memory (LSTM)-based decoder by introducing a gate function that reduces the search scope of the vocabulary for a given image, which we term the word gate decoder. Second, instead of greedily considering only the current maximum actions, we propose a stabilized gradient estimation method whose gradient variance is controlled by the difference between the sampling reward from the current model and the expectation of the historical reward. We conducted extensive experiments, and the results showed that our method accelerates the training process and increases prediction accuracy. Our method was validated on the MS COCO dataset and yielded state-of-the-art performance.
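The two ideas in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the gate here is a plain sigmoid projection applied to vocabulary logits outside any LSTM, and the historical-reward expectation is approximated by an exponential moving average; the names `W_g`, `momentum`, and `RewardBaseline` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def word_gate(image_feat, W_g, logits):
    """Scale vocabulary logits by an image-conditioned gate in (0, 1),
    shrinking the effective search scope to words relevant to the image.
    (Sketch only; the paper places the gate inside an LSTM decoder.)"""
    gate = sigmoid(W_g @ image_feat)   # shape: (vocab_size,)
    return logits * gate               # suppress image-irrelevant words

class RewardBaseline:
    """Exponential moving average of historical sampled rewards, used as
    the policy-gradient baseline in place of SCST's greedy-decoding reward.
    Centring the sampled reward on this expectation is one way to damp
    gradient variance across training steps."""
    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.value = 0.0  # running estimate of E[historical reward]

    def advantage(self, sampled_reward):
        adv = sampled_reward - self.value           # centred reward term
        self.value = (self.momentum * self.value
                      + (1.0 - self.momentum) * sampled_reward)
        return adv

# Usage: gate the logits for one image, then centre a sampled reward.
rng = np.random.default_rng(0)
vocab_size, feat_dim = 8, 4
image_feat = rng.normal(size=feat_dim)
W_g = rng.normal(size=(vocab_size, feat_dim))
logits = rng.normal(size=vocab_size)
gated = word_gate(image_feat, W_g, logits)

baseline = RewardBaseline(momentum=0.5)
adv = baseline.advantage(1.0)   # first reward, baseline starts at 0.0
```

Unlike vanilla SCST, whose baseline is recomputed by a greedy rollout every step, the moving-average baseline changes slowly, which is the stabilizing effect the abstract describes.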

Keywords