IEEE Access (Jan 2020)

Hybrid Deblur Net: Deep Non-Uniform Deblurring With Event Camera

  • Limeng Zhang,
  • Hongguang Zhang,
  • Jihua Chen,
  • Lei Wang

DOI
https://doi.org/10.1109/ACCESS.2020.3015759
Journal volume & issue
Vol. 8
pp. 148075 – 148083

Abstract

Although CNN-based deblurring models have shown superior performance in removing motion blur, restoring a photorealistic image from severe motion blur remains an ill-posed problem due to the loss of temporal information and textures. Event cameras such as the Dynamic and Active-pixel Vision Sensor (DAVIS) simultaneously produce gray-scale Active Pixel Sensor (APS) frames and events; the events capture fast motion at very high temporal resolution, i.e., $1~\mu s$, and thus provide extra information for the blurry APS frames. Because events are naturally noisy and sparse, we employ a recurrent encoder-decoder architecture to generate dense recurrent event representations, which encode the overall historical information. We concatenate the original blurry image with the event representation to form our hybrid input, from which the network learns to restore the sharp output. We conduct extensive experiments on the GoPro dataset and a real blurry event dataset captured by a DAVIS240C. Our experimental results on both synthetic and real images demonstrate state-of-the-art performance for $1280\times 720$ images at 30 fps.
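To make the hybrid-input idea concrete, here is a minimal sketch of how sparse events might be binned into a dense representation, updated recurrently, and concatenated channel-wise with the blurry APS frame. The shapes, the voxel binning, and the simple decay update are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

H, W = 180, 240          # DAVIS240C sensor resolution
NUM_BINS = 5             # temporal bins per event representation (assumed)

def events_to_voxel(events, h=H, w=W, bins=NUM_BINS):
    """Bin (t, x, y, polarity) events into a dense (bins, h, w) grid.
    t is assumed normalized to [0, 1) within the exposure window."""
    voxel = np.zeros((bins, h, w), dtype=np.float32)
    for t, x, y, p in events:
        b = min(int(t * bins), bins - 1)
        voxel[b, y, x] += 1.0 if p > 0 else -1.0
    return voxel

def recurrent_update(prev_state, voxel, decay=0.8):
    """Toy recurrent step: exponentially decayed history plus new events,
    standing in for the paper's learned recurrent encoder."""
    return decay * prev_state + voxel

# One blurry grayscale APS frame and a few synthetic events.
blurry = np.random.rand(1, H, W).astype(np.float32)
events = [(0.1, 10, 20, 1), (0.5, 10, 20, -1), (0.9, 100, 50, 1)]

state = np.zeros((NUM_BINS, H, W), dtype=np.float32)
state = recurrent_update(state, events_to_voxel(events))

# Hybrid input: blurry frame concatenated with the event representation.
hybrid = np.concatenate([blurry, state], axis=0)
print(hybrid.shape)  # (6, 180, 240): 1 image channel + 5 event channels
```

In the actual network, the dense representation would be produced by the learned recurrent encoder-decoder rather than a fixed decay, but the channel-wise concatenation of image and event features is the same structural idea.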
