Complex & Intelligent Systems (Dec 2022)

Attention-guided video super-resolution with recurrent multi-scale spatial–temporal transformer

  • Wei Sun
  • Xianguang Kong
  • Yanning Zhang

DOI
https://doi.org/10.1007/s40747-022-00944-x
Journal volume & issue
Vol. 9, no. 4
pp. 3989–4002

Abstract

Video super-resolution (VSR) aims to recover high-resolution (HR) content from low-resolution (LR) observations by compositing the spatial–temporal information in the LR frames, so propagating and aggregating that information is crucial. Recently, transformers have shown impressive performance on high-level vision tasks, but few attempts have been made to apply them to image restoration, and fewer still to VSR. Moreover, previous transformers process spatial and temporal information simultaneously, which easily synthesizes confused textures, and their high computational cost limits their development. Towards this end, we construct a novel bidirectional recurrent VSR architecture. Our model disentangles the task of learning spatial–temporal information into two easier sub-tasks; each sub-task focuses on propagating and aggregating specific information with a multi-scale transformer-based design, which alleviates the difficulty of learning. Additionally, an attention-guided motion compensation module is applied to remove the influence of misalignment between frames. Experiments on three widely used benchmark datasets show that, relying on superior feature-correlation learning, the proposed network outperforms previous state-of-the-art methods, especially in recovering fine details.
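
The abstract describes the overall architecture but the page carries no code, so the following is a minimal PyTorch sketch of the bidirectional recurrent structure it outlines: per-frame feature extraction, forward and backward recurrent propagation with an attention-guided re-weighting of the propagated state, and a fusion-plus-upsampling head. Every name and size here is an illustrative assumption (`AttentionGuidedAlign`, the plain conv branches standing in for the spatial and temporal multi-scale transformer blocks, the channel width, the upscaling factor); it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class AttentionGuidedAlign(nn.Module):
    """Stand-in for the paper's attention-guided motion compensation:
    re-weights the propagated hidden state with an attention map
    conditioned on the current frame's feature (a simplification)."""
    def __init__(self, c):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1), nn.Sigmoid())

    def forward(self, feat, hidden):
        # Suppress misaligned content in the hidden state before fusion.
        return hidden * self.attn(torch.cat([feat, hidden], dim=1))


class BidirectionalRecurrentVSR(nn.Module):
    """Skeleton of a bidirectional recurrent VSR network with two
    sub-task branches (temporal propagation, spatial fusion)."""
    def __init__(self, c=64, scale=4):
        super().__init__()
        self.extract = nn.Conv2d(3, c, 3, padding=1)
        self.align = AttentionGuidedAlign(c)
        # Plain conv blocks as placeholders for the multi-scale
        # transformer-based spatial and temporal sub-task modules.
        self.temporal = nn.Sequential(
            nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU(inplace=True))
        self.spatial = nn.Sequential(
            nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU(inplace=True))
        self.upsample = nn.Sequential(
            nn.Conv2d(2 * c, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale))

    def _propagate(self, feats, reverse=False):
        # One recurrent pass (forward or backward) over the frame features.
        t = len(feats)
        order = range(t - 1, -1, -1) if reverse else range(t)
        hidden = torch.zeros_like(feats[0])
        out = [None] * t
        for i in order:
            hidden = self.align(feats[i], hidden)
            hidden = self.temporal(torch.cat([feats[i], hidden], dim=1))
            out[i] = hidden
        return out

    def forward(self, lr):  # lr: (B, T, 3, H, W)
        feats = [self.extract(lr[:, i]) for i in range(lr.size(1))]
        fwd = self._propagate(feats, reverse=False)
        bwd = self._propagate(feats, reverse=True)
        sr = []
        for i in range(lr.size(1)):
            # Fuse the two propagation directions, then upsample together
            # with the current frame's own feature.
            fused = self.spatial(torch.cat([fwd[i], bwd[i]], dim=1))
            sr.append(self.upsample(torch.cat([fused, feats[i]], dim=1)))
        return torch.stack(sr, dim=1)  # (B, T, 3, scale*H, scale*W)


if __name__ == "__main__":
    model = BidirectionalRecurrentVSR()
    x = torch.randn(1, 5, 3, 32, 32)   # 5 LR frames
    print(model(x).shape)              # torch.Size([1, 5, 3, 128, 128])
```

The sketch keeps the abstract's key design choice: temporal propagation and spatial fusion are handled by separate modules rather than one joint spatial–temporal block, and the alignment step gates the recurrent state before each fusion.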

Keywords