IEEE Access (Jan 2024)

Deep Learning Based Multi Pose Human Face Matching System

  • Muhammad Sohail,
  • Ijaz Ali Shoukat,
  • Abd Ullah Khan,
  • Haram Fatima,
  • Mohsin Raza Jafri,
  • Muhammad Azfar Yaqub,
  • Antonio Liotta

DOI
https://doi.org/10.1109/ACCESS.2024.3366451
Journal volume & issue
Vol. 12
pp. 26046–26061

Abstract


Current techniques for multi-pose human face matching yield suboptimal outcomes because of the intricate nature of pose equalization and face rotation. Deep learning models such as YOLO-V5 that have been proposed to tackle these complexities suffer from slow frame-matching speeds and consequently exhibit low face recognition accuracy. Some prior studies have investigated multi-pose human face detection systems; however, those studies remain preliminary and do not adequately analyze the utility of such systems. To fill this research gap, we propose a real-time face matching algorithm based on YOLO-V5. Our algorithm utilizes multi-pose human patterns and considers various face orientations, including organizational faces and left, right, top, and bottom alignments, to recognize multiple aspects of people. Using face poses, the algorithm identifies face positions in a dataset of images obtained from mixed-pattern live streams and compares each face with a specific portion of the face that has a relatively similar spectrum for matching against the given dataset. Once a match is found, the algorithm displays the face in Google Colab, using the data collected during the learning phase with the Roboflow key, and tracks it using the YOLO-V5 face monitor. Alignment variations are divided into distinct positions, and each face type is learned separately with its own dedicated analysis. This method offers several benefits for identifying and monitoring humans using their labeling tag as a pattern name, including high face-matching accuracy and minimal speed loss owing to face-to-pose variations. Furthermore, the algorithm addresses the face rotation issue by introducing a mixture of error functions for execution time, accuracy loss, frame-wise failure, and identity loss, which guide the authenticity of the produced image frame. Experimental results confirm the effectiveness of the algorithm in terms of improved accuracy and reduced delay in the face-matching paradigm.
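To make the described pipeline concrete, the sketch below (not the authors' code) shows one possible detect-then-match loop: a YOLOv5 model localizes faces in a frame and each detection is compared against a small labelled gallery. The weights file "yolov5_faces.pt", the "gallery/" folder, the frame "stream_frame.jpg", and the use of HSV histogram correlation as the matching score are all assumptions made for illustration; the paper's actual matching criterion and tracking step are not reproduced here.

```python
# Minimal sketch, assuming a custom face-trained YOLOv5 weights file and a folder
# of labelled reference face crops; histogram correlation stands in for the
# paper's matching step.
import glob
import os

import cv2
import torch

# Load YOLOv5 through the public torch.hub entry point (ultralytics/yolov5).
# "yolov5_faces.pt" is a hypothetical face-detection checkpoint.
model = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5_faces.pt")


def face_histogram(bgr_crop):
    """HSV colour histogram used here as a simple face descriptor."""
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()


# Build the gallery of reference descriptors (label -> histogram).
gallery = {}
for path in glob.glob("gallery/*.jpg"):
    label = os.path.splitext(os.path.basename(path))[0]
    gallery[label] = face_histogram(cv2.imread(path))

frame = cv2.imread("stream_frame.jpg")              # one frame from a live stream
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)        # YOLOv5 hub model expects RGB
results = model(rgb)

for *xyxy, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = map(int, xyxy)
    crop = frame[y1:y2, x1:x2]
    if crop.size == 0:
        continue
    probe = face_histogram(crop)
    # Pick the gallery identity with the highest histogram correlation.
    best = max(
        gallery,
        key=lambda k: cv2.compareHist(gallery[k], probe, cv2.HISTCMP_CORREL),
    )
    score = cv2.compareHist(gallery[best], probe, cv2.HISTCMP_CORREL)
    print(f"detection ({x1},{y1},{x2},{y2}) -> {best} (correlation {score:.2f})")
```

In a live setting the single-image read would be replaced by a per-frame loop over the video stream, with the matched label fed to a tracker, which is where the paper's delay and accuracy trade-offs arise.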

Keywords