IEEE Access (Jan 2021)

Differentiable Forward and Backward Fixed-Point Iteration Layers

  • Younghan Jeon
  • Minsik Lee
  • Jin Young Choi

DOI
https://doi.org/10.1109/ACCESS.2021.3053764
Journal volume & issue
Vol. 9
pp. 18383 – 18392

Abstract

Recently, several studies have proposed methods to utilize some classes of optimization problems in designing deep neural networks to encode constraints that conventional layers cannot capture. However, these methods are still in their infancy and require special treatment, such as analysis of the Karush-Kuhn-Tucker (KKT) conditions, to derive the backpropagation formula. In this paper, we propose a new formulation called the fixed-point iteration (FPI) layer, which facilitates the use of more complicated operations in deep networks. We also propose the backward FPI layer, which is motivated by the recurrent backpropagation (RBP) algorithm. In contrast to RBP, however, the backward FPI layer yields the gradient using a small network module without explicitly calculating the Jacobian. In actual applications, both the forward and backward FPI layers can be treated as nodes in computational graphs. All components of our method are implemented at a high level of abstraction, which allows efficient higher-order differentiation on the nodes. In addition, we present two practical methods, the neural net FPI (FPI_NN) layer and the gradient descent FPI (FPI_GD) layer, in which the FPI update operation is a small neural network module or a single gradient descent step based on a learnable cost function, respectively. FPI_NN is intuitive and simple, while FPI_GD can be used to efficiently train the energy function networks that have been studied recently. Whereas RBP and related studies have not been applied to practical examples, our experiments show that the FPI layer can be successfully applied to real-world problems such as image denoising, optical flow, and multi-label classification.
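
The forward/backward construction described in the abstract can be sketched roughly as follows. The PyTorch module below is an illustrative reconstruction, not the authors' implementation: the name FPILayer, the update_fn argument, the tolerances, and the hook-based backward pass are assumptions chosen to mirror the described scheme. The forward pass iterates a small update module to a fixed point; the backward pass obtains the gradient by a second fixed-point iteration (in the spirit of RBP) that uses only vector-Jacobian products, so the Jacobian is never formed explicitly.

```python
import torch
import torch.nn as nn


class FPILayer(nn.Module):
    """Sketch of a differentiable fixed-point iteration (FPI) layer.

    update_fn(z, x) plays the role of the FPI update operation described in
    the paper (e.g. a small neural network for an FPI_NN-style layer, or a
    single gradient-descent step on a learnable cost for an FPI_GD-style
    layer). Names and tolerances here are illustrative assumptions.
    """

    def __init__(self, update_fn, max_iter=50, tol=1e-4):
        super().__init__()
        self.f = update_fn
        self.max_iter = max_iter
        self.tol = tol

    def _iterate(self, step, v0):
        # Generic fixed-point loop shared by the forward and backward passes.
        v = v0
        for _ in range(self.max_iter):
            v_next = step(v)
            if (v_next - v).norm() <= self.tol * (v.norm() + 1e-8):
                return v_next
            v = v_next
        return v

    def forward(self, x, z0):
        # Forward FPI layer: find z* = f(z*, x) without building an autograd
        # graph through every iteration.
        with torch.no_grad():
            z_star = self._iterate(lambda z: self.f(z, x), z0)

        # One extra update with the graph attached, so gradients can reach x
        # and the parameters of f at the fixed point.
        z_out = self.f(z_star.detach(), x)

        if z_out.requires_grad:
            # Backward FPI layer (RBP-style): the incoming gradient g is
            # corrected by solving v = g + (df/dz)^T v with another
            # fixed-point iteration, using vector-Jacobian products only.
            z_in = z_star.detach().requires_grad_()
            f_at_star = self.f(z_in, x)

            def rbp_hook(grad):
                return self._iterate(
                    lambda v: grad + torch.autograd.grad(
                        f_at_star, z_in, v, retain_graph=True)[0],
                    grad,
                )

            z_out.register_hook(rbp_hook)
        return z_out
```

Under these assumptions, update_fn could be any small nn.Module taking (z, x) for an FPI_NN-style layer, or a step of the form z - alpha * dE(z, x)/dz on a learnable cost E for an FPI_GD-style layer; convergence of the backward iteration presumes the update map is contractive near the fixed point.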

Keywords