IEEE Access (Jan 2021)

An Energy-Efficient Edge Computing Paradigm for Convolution-Based Image Upsampling

  • Ian Colbert
  • Kenneth Kreutz-Delgado
  • Srinjoy Das

DOI
https://doi.org/10.1109/ACCESS.2021.3123938
Journal volume & issue
Vol. 9
pp. 147967 – 147984

Abstract

State-of-the-art deep learning solutions for image upsampling are currently trained using either resize or sub-pixel convolution to learn kernels that generate high-fidelity images with minimal artifacts. However, performing inference with these learned convolution kernels requires memory-intensive feature map transformations that dominate time and energy costs in real-time applications. To alleviate this pressure on memory bandwidth, we propose a novel energy-efficient edge computing paradigm that confines the use of resize or sub-pixel convolution to training in the cloud by transforming learned convolution kernels to deconvolution kernels before deploying them for inference as a functionally equivalent deconvolution. These kernel transformations, intended as a one-time cost when shifting from training to inference, enable a systems designer to use each algorithm in their optimal context by preserving the image fidelity learned when training in the cloud while minimizing data transfer penalties during inference at the edge. We compare the inference properties of these convolution-based image upsampling algorithms and introduce a novel deconvolution inference algorithm, which we refer to as REVD2. To demonstrate the benefits of our approach, we upsample images selected from the BSD300 dataset using a pre-trained single-image super-resolution network provided by the PyTorch model zoo. Using quantitative models of incurred time and energy costs to analyze this deep neural network, we estimate that using REVD2 for inference at the edge improves system latency by $2.1\times$ or $2.8\times$ and energy efficiency by $2.1\times$ or $2.7\times$ when respectively compared to sub-pixel or resize convolution counterparts.
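The kernel transformation described in the abstract rests on the well-known functional equivalence between deconvolution (transposed convolution) and sub-pixel convolution via polyphase decomposition. The following is a minimal 1D NumPy sketch of that equivalence for illustration only; it is not the authors' REVD2 algorithm, and the function names and the stride-2 setup are my own assumptions:

```python
import numpy as np

def transposed_conv1d(x, k, stride=2):
    """Deconvolution view: zero-insert the input, then run a full convolution.
    This is the memory-friendly form the paper deploys at the edge."""
    up = np.zeros(stride * (len(x) - 1) + 1)
    up[::stride] = x                      # zero-insertion upsampling
    return np.convolve(up, k)             # 'full' convolution

def subpixel_conv1d(x, k, stride=2):
    """Sub-pixel view: one standard convolution per output phase using the
    stride-spaced taps of k, then interleave the phase outputs."""
    # k must have length divisible by stride so all phases align in length
    phases = [np.convolve(x, k[p::stride]) for p in range(stride)]
    out = np.empty(stride * len(phases[0]))
    for p, ph in enumerate(phases):
        out[p::stride] = ph               # interleave (pixel-shuffle in 1D)
    return out

x = np.array([1.0, 2.0, -1.0, 0.5])
k = np.array([0.25, 0.5, 0.5, 0.25])      # kernel length divisible by stride
assert np.allclose(transposed_conv1d(x, k), subpixel_conv1d(x, k))
```

Because the two views produce identical outputs, kernels learned in one form can be rearranged offline into the other, which is the one-time training-to-inference transformation the paper exploits; the 2D case used for image upsampling follows the same phase-splitting idea per spatial axis.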

Keywords