Jisuanji kexue (Jun 2022)

Compression Algorithm of Face Recognition Model Based on Unlabeled Knowledge Distillation

  • CHENG Xiang-ming, DENG Chun-hua

DOI
https://doi.org/10.11896/jsjkx.210400023
Journal volume & issue
Vol. 49, no. 6
pp. 245 – 253

Abstract


When face recognition technology is deployed on mobile devices, the model usually has to be processed with acceleration techniques such as model compression. Knowledge distillation is a model compression method that is widely used in practice and easy to train. Existing knowledge distillation algorithms require a large amount of labeled face data, which may raise security issues such as identity privacy leakage. At the same time, collecting labeled face data at scale is costly, while the massive amount of unlabeled face data that can be collected or generated goes unused. To address these problems, this paper analyzes the characteristics of knowledge distillation in face recognition tasks and proposes an indirectly supervised training method, unlabeled knowledge distillation. This method can exploit massive amounts of unlabeled face data, thereby avoiding security risks such as privacy leakage. However, the distribution of an unlabeled face data set is unpredictable and may be unbalanced, which limits the performance of the indirect supervision algorithm. This work further proposes a face content replacement data augmentation method, which balances the distribution of the face data by replacing part of the content of a face and at the same time increases the diversity of the face data. Extensive experimental results show that, under a high compression ratio of the face recognition model, the proposed algorithm achieves state-of-the-art performance and surpasses the large network on the LFW data set.
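The following is a minimal sketch of what label-free (indirectly supervised) feature distillation and a face content replacement style augmentation could look like, assuming a pretrained teacher that outputs face embeddings. The model names, embedding dimension, cosine-based loss, and the rectangular swap region are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: unlabeled knowledge distillation for face recognition (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyStudent(nn.Module):
    """Hypothetical compact student network; any lightweight backbone could be used."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))

def distill_step(student, teacher, images, optimizer):
    """One training step: the student imitates the teacher's face embeddings,
    so no identity labels are needed (the teacher provides the supervision)."""
    with torch.no_grad():
        t_emb = F.normalize(teacher(images), dim=1)   # target embeddings from the large teacher
    s_emb = F.normalize(student(images), dim=1)
    loss = (1.0 - F.cosine_similarity(s_emb, t_emb, dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def face_content_replacement(batch, ratio=0.5):
    """Illustrative augmentation in the spirit of face content replacement:
    swap a central rectangular region between shuffled pairs of faces to
    diversify and rebalance the unlabeled data. The paper's actual
    replacement strategy may differ."""
    b, _, h, w = batch.shape
    perm = torch.randperm(b)
    ph, pw = int(h * ratio), int(w * ratio)
    top, left = (h - ph) // 2, (w - pw) // 2
    mixed = batch.clone()
    mixed[:, :, top:top + ph, left:left + pw] = batch[perm, :, top:top + ph, left:left + pw]
    return mixed
```

Because the supervision signal is the teacher's embedding rather than an identity label, any unlabeled (or augmented) face image can be used as training input, which is the key property the abstract describes.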
