IEEE Access (Jan 2024)

Masked Face Recognition With Generated Occluded Part Using Image Augmentation and CNN Maintaining Face Identity

  • Susanta Malakar,
  • Werapon Chiracharit,
  • Kosin Chamnongthai

DOI
https://doi.org/10.1109/ACCESS.2024.3446652
Journal volume & issue
Vol. 12
pp. 126356 – 126375

Abstract


Face masks pose challenges for face recognition because key facial features are occluded. Current methods rely mainly on inpainting and reconstruction to improve recognition, but reconstructed images often lose identity and show low face similarity, because the reconstructed features either come from other persons or are newly generated. This paper proposes a method that improves the SSIM (Structural Similarity Index Measure) value, face-recognition accuracy, and identity preservation by augmenting only the lower part of masked face images rather than generating the entire face. The method first analyzes masked face images to detect the occluded area and learn the boundary between the visible and occluded parts. It then builds two datasets containing the upper and lower parts of faces. A pre-trained CNN matches feature maps of the upper part of a query image against the upper-part dataset to find a candidate image. Techniques such as SURF detect the geometric differences, which are then applied to the lower part of the candidate image to form the final full-face image. The system's performance was evaluated on the LFW, CASIA-WebFace, AR, and FACES datasets; accuracy, precision, recall, and F1-score were computed and compared with conventional methods. The proposed method improved recognition accuracy by 4–6% and significantly increased the SSIM value, while offering greater convenience, shorter runtime, and lower computational cost than existing methods.
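The candidate-selection step described above (matching a query's upper-face features against the upper-part dataset) can be sketched as a nearest-neighbor search over embeddings. This is a minimal illustration, not the paper's implementation: it assumes the pre-trained CNN has already been used to extract one feature vector per image, and `find_candidate` is a hypothetical helper name.

```python
import numpy as np

def find_candidate(query_emb, gallery_embs):
    """Return the index of the best-matching gallery image and all scores.

    query_emb    -- 1-D feature vector of the query's upper face (assumed
                    to come from a pre-trained CNN).
    gallery_embs -- 2-D array, one upper-face feature vector per row.
    """
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q
    # Highest cosine similarity picks the candidate whose lower part
    # will be warped onto the query to form the full-face image.
    return int(np.argmax(scores)), scores
```

In the full pipeline, the lower part of the selected candidate would then be geometrically aligned (e.g., via SURF keypoint correspondences) before being merged with the query's visible upper part.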

Keywords