IEEE Access (Jan 2018)

Multi-View Transformation via Mutual-Encoding InfoGenerative Adversarial Networks

  • Liang Sun,
  • Wenjing Kang,
  • Yuxuan Han,
  • Hongwei Ge

DOI
https://doi.org/10.1109/ACCESS.2018.2845696
Journal volume & issue
Vol. 6
pp. 43315 – 43326

Abstract

Multi-view transformation is the problem of transforming the available source views of a given object into unknown target views. To solve this problem, this paper proposes an algorithm based on Mutual-Encoding InfoGenerative Adversarial Networks (MEIGANs). A mutual-encoding representation learning network obtains multi-view representations: its encoders guarantee that different views of the same object are mapped to a common representation carrying sufficient information about the object itself. An InfoGenerative Adversarial Networks-based transformation network then transforms views of the given object; it carries the representation information in both the generative and discriminative models, guaranteeing that the synthesized transformed view matches the source view. The advantages of MEIGAN are that it bypasses direct mappings among different views and that it can handle both missing views in the training data and the mapping between transformed and source views. Finally, experiments on incomplete-to-complete data restoration tasks on MNIST and CelebA, and on multi-view angle transformation tasks on 3-D rendered chairs and multi-view clothing, show that the proposed algorithm yields satisfactory transformation results.
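The core mutual-encoding idea described in the abstract — separate encoders mapping different views of the same object into one shared representation, trained so the encodings agree — can be sketched minimally as follows. The linear encoders, dimensions, and mean-squared consistency loss here are illustrative placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed for the sketch, not from the paper)
VIEW_DIM, REP_DIM = 8, 4

# One encoder per view; MEIGAN uses deep networks, here simple linear maps
W_front = rng.standard_normal((REP_DIM, VIEW_DIM))
W_side = rng.standard_normal((REP_DIM, VIEW_DIM))

def encode(W, view):
    """Map a view vector into the common representation space."""
    return np.tanh(W @ view)

# Two toy views of the same object
front_view = rng.standard_normal(VIEW_DIM)
side_view = rng.standard_normal(VIEW_DIM)

z_front = encode(W_front, front_view)
z_side = encode(W_side, side_view)

# Mutual-encoding consistency: encodings of different views of the same
# object should coincide; minimizing this loss drives them together.
consistency_loss = float(np.mean((z_front - z_side) ** 2))
print(consistency_loss)
```

In the full model this consistency objective is combined with the InfoGAN-style adversarial losses so that the common representation both aligns the views and carries enough information to synthesize the target view.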