IEEE Access (Jan 2020)

Non-Visual to Visual Translation for Cross-Domain Face Recognition

  • Han Byeol Bae,
  • Taejae Jeon,
  • Yongju Lee,
  • Sungjun Jang,
  • Sangyoun Lee

DOI
https://doi.org/10.1109/ACCESS.2020.2980047
Journal volume & issue
Vol. 8
pp. 50452–50464

Abstract


Reducing the cross-modality gap between two different domains is a challenging problem in heterogeneous face recognition (HFR). Current visual-domain face recognition systems struggle to resolve the cross-modality discrepancy when the two compared domains are heterogeneous. Moreover, existing HFR datasets are far too small, which makes training considerably difficult. This paper proposes a novel two-step framework, consisting of an image translation module and a feature learning module, to obtain an enhanced cross-modality matching system for heterogeneous datasets. First, the image translation module combines a Preprocessing Chain (PC) method, CycleGAN, and a Siamese network. It preserves image content while transferring style from the source domain to the target domain. Second, in the feature learning module, the training dataset and its translated images are used together to fine-tune a backbone model pre-trained in the visual domain. This enables discriminative and robust feature matching between the probe and gallery test sets in the visual domain. The experimental results are evaluated under two scenarios, using the CUHK Face Sketch FERET (CUFSF) dataset and the CASIA NIR-VIS 2.0 dataset. The proposed method achieves better recognition performance than state-of-the-art methods.
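
To make the two-step pipeline concrete, the following is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: the generator G (standing in for a trained CycleGAN), the toy backbone, the 128-d embedding size, and all shapes are hypothetical placeholders, and the actual method additionally involves the Preprocessing Chain and a Siamese identity-preservation constraint during translation.

import torch
import torch.nn as nn

# --- Step 1: image translation (non-visual -> visual) -----------------------
# Hypothetical placeholder for a CycleGAN generator already trained to map
# non-visual images (e.g., NIR or sketch) into the visual domain.
G = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

def translate(batch_nv: torch.Tensor) -> torch.Tensor:
    """Translate a batch of non-visual images into the visual domain."""
    with torch.no_grad():
        return G(batch_nv)

# --- Step 2: feature learning (fine-tune a pre-trained visual backbone) -----
# Toy stand-in for a face model pre-trained on visual-domain data; the
# 128-d embedding and 10-identity classifier are illustrative choices.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(16, 128))
classifier = nn.Linear(128, 10)

opt = torch.optim.Adam(list(backbone.parameters()) +
                       list(classifier.parameters()), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: original visual images and their CycleGAN-translated
# counterparts are used together for fine-tuning, as the abstract describes.
visual = torch.randn(4, 3, 112, 112)
non_visual = torch.randn(4, 3, 112, 112)
labels = torch.randint(0, 10, (8,))

inputs = torch.cat([visual, translate(non_visual)], dim=0)
logits = classifier(backbone(inputs))
loss = loss_fn(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
print(f"fine-tuning loss: {loss.item():.4f}")

At test time, probe images from the non-visual domain would be passed through the same translation step before feature extraction, so that probe and gallery features are matched entirely in the visual domain.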

Keywords