IEEE Access (Jan 2024)

DaliID: Distortion-Adaptive Learned Invariance for Identification—A Robust Technique for Face Recognition and Person Re-Identification

  • Wes Robbins,
  • Gabriel Bertocco,
  • Terrance E. Boult

DOI
https://doi.org/10.1109/ACCESS.2024.3385782
Journal volume & issue
Vol. 12
pp. 55784–55799

Abstract

In real-world applications, face recognition and person re-identification are subject to image degradations such as motion blur, atmospheric turbulence, or upsampling artifacts, which are known to lower performance. This work directly addresses challenges in low-quality scenarios with 1) practical, novel updates to training and inference that improve robustness to realistic distortions in face recognition and person re-identification, and 2) new datasets for long-distance recognition. We propose a method that progressively learns from images subject to mild and strong distortions, caused mainly by atmospheric turbulence. The method has a novel distortion loss to improve robustness, which is empirically shown to be highly effective in low-quality scenarios. Two further strategies are proposed to integrate distortion augmentation while also retaining the highest performance in high-quality scenarios. First, during training, an adaptive weighting schedule, which leverages the construction of different levels of distortion augmentation, trains the model in an easy-to-hard manner. Second, at inference, a magnitude-weighted fusion of features from the parallel models retains robustness across both high-quality and low-quality imagery. Unlike prior work, our model does not rely on any image restoration or style transfer technique, and we are the first to employ explicit distortion weighting during training and evaluation. Our model achieves the best performance compared to prior works on face recognition and person re-identification benchmarks, including IARPA Janus Benchmark-S (IJB-S), TinyFace, DeepChange, Multi-Scene Multi-Time 2017 (MSMT17), and our novel long-distance datasets.
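The abstract does not spell out the fusion rule, but a common magnitude-weighted scheme exploits the fact that embedding magnitude tends to track input quality: summing the unnormalized features from the parallel models implicitly weights each branch by its feature norm. The sketch below is a minimal illustration under that assumption, not the paper's exact implementation; the function name and the two-branch setup are hypothetical.

```python
import numpy as np

def magnitude_weighted_fusion(f_a: np.ndarray, f_b: np.ndarray) -> np.ndarray:
    """Fuse embeddings from two parallel models (hypothetical sketch).

    Summing unnormalized embeddings weights each branch by its L2 norm,
    so the branch that is more confident on this input dominates. The
    fused vector is unit-normalized for cosine-similarity matching.
    """
    fused = f_a + f_b  # feature norms act as implicit quality weights
    return fused / np.linalg.norm(fused)
```

With this convention, a branch that produces a low-magnitude (low-confidence) embedding for a heavily distorted input contributes proportionally less to the fused representation.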
