APL Machine Learning (Mar 2024)

VERI-D: A new dataset and method for multi-camera vehicle re-identification of damaged cars under varying lighting conditions

  • Shao Liu
  • Sos S. Agaian

DOI
https://doi.org/10.1063/5.0183408
Journal volume & issue
Vol. 2, no. 1
pp. 016120 – 016120-14

Abstract

Vehicle re-identification (V-ReID) is a critical task that aims to match the same vehicle across images from different camera viewpoints. Previous studies have leveraged attribute clues, such as color, model, and license plate, to enhance V-ReID performance. However, these methods often lack effective interaction between the global–local features and the final V-ReID objective. Moreover, they do not address challenging real-world conditions, such as high viewpoint variation, extreme illumination, and car appearance changes (e.g., due to damage or wrong driving). We propose a novel framework that tackles these problems and advances research in V-ReID: it handles various types of car appearance change and achieves robust V-ReID under varying lighting conditions. Our main contributions are as follows: (i) we propose a new Re-ID architecture, the global–local self-attention network, which integrates local information into the feature learning process and enhances the feature representation for V-ReID; (ii) we introduce a novel damaged-vehicle Re-ID dataset, VERI-D, the first publicly available dataset focused on this challenging yet practical scenario, containing both natural and synthetic images of damaged vehicles captured from multiple camera viewpoints and under different lighting conditions; and (iii) we conduct extensive experiments on the VERI-D dataset, demonstrating the effectiveness of our approach in addressing the challenges of damaged vehicle re-identification. We also compare our method with several state-of-the-art V-ReID methods and demonstrate its superiority.
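The abstract names a global–local self-attention network but gives no architectural details. As a minimal, hypothetical sketch of the general idea (letting a global descriptor and stripe-level local descriptors interact through self-attention before the Re-ID objective), the following PyTorch module shows one plausible arrangement. The class name, stripe count, embedding width, and head count are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GlobalLocalSelfAttention(nn.Module):
    """Hypothetical sketch of global-local feature fusion via self-attention.

    The paper's abstract only names the architecture; every layer choice
    here (stripe count, embedding size, head count) is an assumption.
    """

    def __init__(self, dim=512, num_stripes=4, num_heads=8):
        super().__init__()
        self.num_stripes = num_stripes
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_map):
        # feat_map: (B, C, H, W) feature map from a CNN backbone.
        b, c, h, w = feat_map.shape
        # Global token: average-pool the whole map.
        global_tok = feat_map.mean(dim=(2, 3)).unsqueeze(1)          # (B, 1, C)
        # Local tokens: average-pool horizontal stripes of the map.
        stripes = feat_map.chunk(self.num_stripes, dim=2)
        local_toks = torch.stack(
            [s.mean(dim=(2, 3)) for s in stripes], dim=1)            # (B, S, C)
        tokens = torch.cat([global_tok, local_toks], dim=1)          # (B, 1+S, C)
        # Self-attention lets global and local features interact,
        # followed by a residual connection and layer norm.
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)
        # Return the fused global descriptor for the Re-ID objective.
        return tokens[:, 0]


if __name__ == "__main__":
    # Usage: fuse a ResNet-style feature map into one Re-ID embedding.
    block = GlobalLocalSelfAttention(dim=512, num_stripes=4)
    fmap = torch.randn(2, 512, 16, 16)   # dummy backbone output
    emb = block(fmap)
    print(emb.shape)                      # torch.Size([2, 512])
```

Pooling the map into one global token plus a few stripe tokens keeps the attention cost small while still allowing localized cues, such as a damaged panel visible in a single stripe, to reweight the global embedding before matching.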