IEEE Access (Jan 2019)

Video Super Resolution via Deep Global-Aware Network

  • Kwok-Wai Hung
  • Chaoming Qiu
  • Jianmin Jiang

DOI
https://doi.org/10.1109/ACCESS.2019.2920774
Journal volume & issue
Vol. 7
pp. 74711–74720

Abstract

Video super-resolution aims to increase the resolution of videos by exploiting the intra-frame and inter-frame dependencies of the low-resolution video sequences. Video super-resolution usually involves two dependent steps: motion compensation and super-resolution reconstruction. In this paper, we propose a new deep learning framework without explicit motion estimation, utilizing a self-attention model to exploit the full receptive field of the input video frames. In other words, the proposed deep neural network extracts local features at all spatial-temporal locations and combines them into global features using self-attention networks in order to reconstruct the high-resolution video frame. The proposed global-aware network outperforms state-of-the-art deep learning-based image and video super-resolution algorithms in terms of subjective and objective quality with fewer computational operations, as verified by extensive experiments on public image and video datasets, including Set5, Set14, B100, Urban100, and Vid4.
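
To make the global-aware idea concrete, below is a minimal sketch of a spatial-temporal self-attention block that aggregates local features from all locations of a stack of video frames, assuming a non-local-block style formulation; the module name GlobalAttention, the channel/reduction parameters, and the 5-frame input are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: global aggregation of spatial-temporal features via self-attention.
import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    """Combine local features from all spatial-temporal locations into global features."""
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.query = nn.Conv3d(channels, reduced, kernel_size=1)
        self.key = nn.Conv3d(channels, reduced, kernel_size=1)
        self.value = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):                        # x: (B, C, T, H, W) local features
        b, c, t, h, w = x.shape
        q = self.query(x).flatten(2)             # (B, C', T*H*W)
        k = self.key(x).flatten(2)                # (B, C', T*H*W)
        v = self.value(x).flatten(2)              # (B, C,  T*H*W)
        attn = torch.softmax(q.transpose(1, 2) @ k, dim=-1)   # (B, THW, THW) affinities
        out = v @ attn.transpose(1, 2)            # weighted sum over all locations
        out = out.view(b, c, t, h, w)
        return out + x                            # residual: global context added to local features

# Usage: fuse features of 5 consecutive low-resolution frames before reconstruction.
feats = torch.randn(1, 64, 5, 32, 32)             # (batch, channels, frames, H, W)
fused = GlobalAttention(64)(feats)
print(fused.shape)                                # torch.Size([1, 64, 5, 32, 32])
```

Because each output location attends to every spatial-temporal location, the block has a full receptive field over the input frames without explicit motion estimation; the fused features would then feed an upsampling/reconstruction stage to produce the high-resolution frame.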

Keywords