IEEE Access (Jan 2019)

A Very Deep Spatial Transformer Towards Robust Single Image Super-Resolution

  • Jianmin Jiang,
  • Hossam M. Kasem,
  • Kwok-Wai Hung

DOI
https://doi.org/10.1109/ACCESS.2019.2908996
Journal volume & issue
Vol. 7
pp. 45618 – 45631

Abstract


In general, existing research on single-image super-resolution does not consider the practical scenario in which, when images are transmitted over noisy channels, any possible geometric transformations could incur significant quality loss and distortion. To address this problem, we present a new and robust super-resolution method, in which a robust spatially-transformed deep learning framework is established to simultaneously perform both the geometric transformation and single-image super-resolution. The proposed method seamlessly integrates a deep-residual-learning-based spatial transform module with a very deep super-resolution module to achieve robust and improved single-image super-resolution. In comparison with the existing state of the art, our proposed robust single-image super-resolution method has a number of novel features: 1) content-characterized deep features are extracted from the input LR images to identify the incurred geometric transformations, so that the transformation parameters can be optimized to influence and control the super-resolution process; 2) the effects of any geometric transformations can be automatically corrected at the output without compromising the quality of the final super-resolved images; and 3) compared with the existing research reported in the literature, our proposed method achieves the advantage that HR images can be recovered from down-sampled LR images corrupted by a number of different geometric transformations. Extensive experiments, measured by both the peak signal-to-noise ratio (PSNR) and the structural similarity index measurement (SSIM), show that our proposed method achieves a high level of robustness against a number of geometric transformations, including scaling, translation, and rotation. Benchmarked against existing state-of-the-art SR methods, our proposed method delivers superior performance on a wide range of publicly available datasets widely adopted across relevant research communities.
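The spatial transform module described in the abstract builds on the spatial transformer idea: affine transformation parameters are predicted from the input, a sampling grid is generated from them, and the image is resampled over that grid with bilinear interpolation. As a minimal NumPy sketch of the grid-generation and resampling step only (function names here are illustrative and not from the paper's code; the paper's module additionally learns the parameters with deep residual layers):

```python
import numpy as np

def affine_grid(theta, h, w):
    """Build an h x w sampling grid from a 2x3 affine matrix.

    Coordinates are normalized to [-1, 1], as in spatial transformer networks.
    Returns source (x, y) locations for every target pixel, shape (2, h, w).
    """
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous, 3 x (h*w)
    return (theta @ coords).reshape(2, h, w)

def bilinear_sample(img, grid):
    """Sample img at (possibly fractional) grid locations with bilinear interpolation."""
    h, w = img.shape
    # Map normalized coordinates back to pixel indices.
    x = (grid[0] + 1) * (w - 1) / 2
    y = (grid[1] + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx = np.clip(x - x0, 0.0, 1.0)
    dy = np.clip(y - y0, 0.0, 1.0)
    # Weighted sum of the four neighboring pixels.
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)

# Sanity check: the identity transform leaves the image unchanged.
img = np.arange(16, dtype=float).reshape(4, 4)
theta = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
out = bilinear_sample(img, affine_grid(theta, 4, 4))
print(np.allclose(out, img))  # True
```

Because the grid generation and bilinear sampling are differentiable in `theta`, a network can learn to invert rotations, translations, and scalings end-to-end, which is what lets the transformation correction be trained jointly with the super-resolution module.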
