IEEE Access (Jan 2020)

Video Deblurring via Temporally and Spatially Variant Recurrent Neural Network

  • Runhua Jiang,
  • Li Zhao,
  • Tao Wang,
  • Jinxin Wang,
  • Xiaoqin Zhang

DOI
https://doi.org/10.1109/ACCESS.2019.2962505
Journal volume & issue
Vol. 8
pp. 7587 – 7597

Abstract

Camera shake and high-speed object motion often produce blurry videos. Recovering sharp videos with existing single- or multi-image deblurring methods is hard, because the blur artifacts in videos vary both temporally and spatially. In this paper, we propose a temporally and spatially variant recurrent neural network for video deblurring, in which both the temporal and spatial variants employ ConvGRU blocks and a weight generator to capture spatio-temporal features. The proposed model is trained end-to-end with the same number of input and output frames, so it does not reduce the frame count in either the training or testing stage, which is important in practical applications. Quantitative and qualitative evaluations on standard benchmark datasets demonstrate that the proposed method outperforms current state-of-the-art methods.
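The ConvGRU block named in the abstract replaces the fully connected transforms of a standard GRU with convolutions, so the recurrent state keeps its spatial layout while propagating information across frames. The following is a minimal sketch of a generic ConvGRU cell following the standard formulation; the class name, channel sizes, and kernel size are illustrative assumptions, not the authors' exact architecture or weight-generator design.

```python
# Hedged sketch of a generic ConvGRU cell (standard formulation),
# not the paper's exact model. All names and sizes are illustrative.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        pad = kernel // 2
        # update (z) and reset (r) gates, computed jointly from input + hidden state
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel, padding=pad)
        # candidate hidden state
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, kernel, padding=pad)

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        # convex combination of previous and candidate state
        return (1 - z) * h + z * h_tilde

# Propagate a hidden state across a short clip of frames.
cell = ConvGRUCell(in_ch=3, hid_ch=16)
frames = torch.randn(5, 1, 3, 32, 32)      # (time, batch, C, H, W)
h = torch.zeros(1, 16, 32, 32)
for t in range(frames.shape[0]):
    h = cell(frames[t], h)
print(tuple(h.shape))                       # spatial size is preserved
```

Because padding keeps the spatial resolution fixed, the same cell can be unrolled over any number of frames, which is consistent with the abstract's point that input and output contain the same number of frames.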

Keywords