IEEE Access (Jan 2023)

GhostFaceNets: Lightweight Face Recognition Model From Cheap Operations

  • Mohamad Alansari,
  • Oussama Abdul Hay,
  • Sajid Javed,
  • Abdulhadi Shoufan,
  • Yahya Zweiri,
  • Naoufel Werghi

DOI
https://doi.org/10.1109/ACCESS.2023.3266068
Journal volume & issue
Vol. 11
pp. 35429 – 35446

Abstract


The development of deep learning-based biometric models that can be deployed on devices with constrained memory and computational resources has proven to be a significant challenge. Previous approaches to this problem have not prioritized reducing feature-map redundancy, and the introduction of Ghost modules represents a major innovation in this area. Ghost modules use a series of inexpensive linear transformations to generate additional feature maps from a set of intrinsic features, allowing a more comprehensive representation of the underlying information. GhostNetV1 and GhostNetV2, both of which are built on Ghost modules, serve as the foundation for a family of lightweight face recognition models called GhostFaceNets. GhostNetV2 extends the original GhostNetV1 with an attention mechanism that captures long-range dependencies. Evaluation of GhostFaceNets on various benchmarks shows that these models offer superior performance at a computational complexity of only about 60–275 MFLOPs, far lower than that of State-Of-The-Art (SOTA) large convolutional neural network (CNN) models, which typically require billions of FLOPs. GhostFaceNets trained with the ArcFace loss on the refined MS-Celeb-1M dataset achieve SOTA performance on all benchmarks. Compared with previous SOTA mobile CNNs, GhostFaceNets substantially improve the efficiency of face verification. The GhostFaceNets code is available at: https://github.com/HamadYA/GhostFaceNets.
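
To make the Ghost module idea in the abstract concrete, below is a minimal, illustrative PyTorch sketch, not the authors' implementation (their repository linked above is the reference). A primary convolution produces a small set of intrinsic feature maps, and a cheap depthwise convolution derives additional "ghost" maps from them; the two sets are concatenated. The channel counts, kernel sizes, the `ratio` parameter, and the use of PReLU here are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: a primary convolution yields a few
    'intrinsic' feature maps, and a cheap depthwise convolution generates
    extra 'ghost' maps from them; the two are concatenated."""

    def __init__(self, in_ch, out_ch, ratio=2, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = math.ceil(out_ch / ratio)      # intrinsic feature maps
        ghost_ch = init_ch * (ratio - 1)         # maps produced by cheap ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.PReLU(init_ch),                   # PReLU chosen here as an assumption
        )
        self.cheap = nn.Sequential(
            # Depthwise convolution = the inexpensive linear transformation
            nn.Conv2d(init_ch, ghost_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.PReLU(ghost_ch),
        )
        self.out_ch = out_ch

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        # Concatenate intrinsic and ghost maps, then trim to the requested width
        return torch.cat([intrinsic, ghost], dim=1)[:, :self.out_ch]

# Example usage on a hypothetical 56x56 feature map from a face crop:
m = GhostModule(64, 128)
y = m(torch.randn(1, 64, 56, 56))   # -> torch.Size([1, 128, 56, 56])
```

Because the ghost half of the output comes from depthwise rather than full convolutions, the module produces the same number of output channels at a fraction of the FLOPs of a standard convolution, which is the source of the 60–275 MFLOP budget cited in the abstract.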

Keywords