Sensors (May 2022)

Single Image Video Prediction with Auto-Regressive GANs

  • Jiahui Huang,
  • Yew Ken Chia,
  • Samson Yu,
  • Kevin Yee,
  • Dennis Küster,
  • Eva G. Krumhuber,
  • Dorien Herremans,
  • Gemma Roig

DOI
https://doi.org/10.3390/s22093533
Journal volume & issue
Vol. 22, no. 9
p. 3533

Abstract

In this paper, we introduce an approach for future frame prediction based on a single input image. Our method generates an entire video sequence from the information contained in the input frame. We adopt an autoregressive generation process, i.e., the output of each time step is fed as the input to the next step. Unlike video prediction methods that use “one-shot” generation, our method preserves far more detail from the input image while still capturing the critical pixel-level changes between frames. We overcome the problem of generation-quality degradation by introducing a “complementary mask” module into our architecture, and we show that this allows the model to focus only on generating the pixels that need to change, while reusing from the previous frame those that should remain static. We empirically validate our method against various video prediction models on the UT Dallas Dataset and show that our approach generates high-quality, realistic video sequences from a single static input image. In addition, we validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model on a human action dataset, the Weizmann Action database.
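
To make the autoregressive rollout and the complementary-mask blending concrete, here is a minimal PyTorch sketch. The `Generator` architecture, layer sizes, and function names (`predict_video`, `frame_head`, `mask_head`) are illustrative assumptions, not the authors' actual implementation; the sketch only shows the core idea that each predicted frame is a mask-weighted blend of newly generated pixels and pixels copied from the previous frame, and that each output is fed back as the next input.

```python
# Conceptual sketch only: architecture and names are assumptions,
# not the paper's actual model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator: maps the previous frame to a candidate next frame
    plus a per-pixel mask in [0, 1]."""
    def __init__(self, channels=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.frame_head = nn.Conv2d(32, channels, 3, padding=1)  # candidate pixels
        self.mask_head = nn.Conv2d(32, 1, 3, padding=1)          # change mask

    def forward(self, prev_frame):
        h = self.backbone(prev_frame)
        candidate = torch.tanh(self.frame_head(h))   # newly generated pixels
        mask = torch.sigmoid(self.mask_head(h))      # 1 = regenerate, 0 = reuse
        return candidate, mask

def predict_video(generator, first_frame, num_frames=16):
    """Autoregressive rollout: each output frame becomes the next input.
    The complementary mask blends generated pixels with pixels copied
    verbatim from the previous frame, so static regions keep their
    original detail instead of being re-synthesized each step."""
    frames = [first_frame]
    frame = first_frame
    for _ in range(num_frames - 1):
        candidate, mask = generator(frame)
        frame = mask * candidate + (1.0 - mask) * frame  # complementary blend
        frames.append(frame)
    return torch.stack(frames, dim=1)  # (batch, time, C, H, W)

if __name__ == "__main__":
    g = Generator()
    x0 = torch.rand(1, 3, 64, 64) * 2 - 1  # one input frame in [-1, 1]
    video = predict_video(g, x0, num_frames=8)
    print(video.shape)  # torch.Size([1, 8, 3, 64, 64])
```

Because the mask is complementary, pixels the model leaves untouched are copied exactly from the previous frame rather than regenerated, which is how this kind of design can avoid the progressive quality degradation typical of long autoregressive rollouts.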

Keywords