Cognitive Computation and Systems (Dec 2022)

PruneFaceDet: Pruning lightweight face detection network by sparsity training

  • Nanfei Jiang,
  • Zhexiao Xiong,
  • Hui Tian,
  • Xu Zhao,
  • Xiaojie Du,
  • Chaoyang Zhao,
  • Jinqiao Wang

DOI
https://doi.org/10.1049/ccs2.12065
Journal volume & issue
Vol. 4, no. 4
pp. 391 – 399

Abstract

Face detection is the basic step of many face analysis tasks. In practice, face detectors usually run on mobile devices with limited memory and computing resources, so it is important to keep them lightweight. To this end, current methods usually focus on directly designing lightweight detectors. Nevertheless, it has not been fully explored whether the resource consumption of these lightweight detectors can be further reduced without sacrificing too much accuracy. In this study, we propose to apply network pruning to a lightweight face detection network to further reduce its parameters and floating-point operations. To identify the channels of less importance, we train the network with sparsity regularisation on the channel scaling factors of each layer. We then remove the connections and corresponding weights whose scaling factors are near zero after sparsity training. We apply the proposed pruning pipeline to a state-of-the-art face detection method, EagleEye, and obtain a shrunken EagleEye model with fewer computing operations and parameters. The shrunken model achieves accuracy comparable to the unpruned model: using the proposed method, the shrunken EagleEye reduces parameter size by 56.3% with almost no accuracy loss on the WiderFace dataset.
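
The pipeline described in the abstract, sparsity regularisation on per-channel scaling factors followed by removal of near-zero channels, can be sketched in a few lines. The snippet below is a minimal, hypothetical PyTorch illustration only: the toy network, the penalty weight `l1_lambda`, and the pruning threshold are assumptions for illustration, not the paper's EagleEye architecture or training settings.

```python
# Minimal sketch: sparsity training on BatchNorm scaling factors (gamma),
# then selecting channels whose factors are near zero as pruning candidates.
# The network, penalty weight and threshold are illustrative assumptions.
import torch
import torch.nn as nn


class TinyConvNet(nn.Module):
    """Stand-in backbone: each BatchNorm2d holds the per-channel scaling factors."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.head = nn.Linear(32, 2)  # dummy classification head

    def forward(self, x):
        x = self.features(x).mean(dim=(2, 3))  # global average pooling
        return self.head(x)


def sparsity_penalty(model, l1_lambda=1e-4):
    """L1 regularisation on BatchNorm scaling factors, pushing
    unimportant channels towards zero during training."""
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return l1_lambda * penalty


def select_channels_to_keep(model, threshold=1e-2):
    """After sparsity training, keep channels whose scaling factor exceeds
    the threshold; the near-zero ones are candidates for pruning."""
    keep = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            keep[name] = (m.weight.detach().abs() > threshold).nonzero().flatten()
    return keep


if __name__ == "__main__":
    model = TinyConvNet()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One illustrative training step on random data: task loss + sparsity penalty.
    x = torch.randn(4, 3, 32, 32)
    y = torch.randint(0, 2, (4,))
    loss = criterion(model(x), y) + sparsity_penalty(model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Identify the channels that survive the near-zero-factor criterion.
    for layer, idx in select_channels_to_keep(model).items():
        print(f"{layer}: keep {idx.numel()} channels")
```

In practice the surviving channel indices would then be used to build a slimmer copy of the detector and copy over the corresponding weights before fine-tuning; those steps are omitted here for brevity.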

Keywords