IEEE Access (Jan 2020)

Makeup Presentation Attacks: Review and Detection Performance Benchmark

  • Christian Rathgeb,
  • Pawel Drozdowski,
  • Christoph Busch

DOI
https://doi.org/10.1109/ACCESS.2020.3044723
Journal volume & issue
Vol. 8
pp. 224958–224973

Abstract

The application of facial cosmetics may cause substantial alterations in facial appearance, which can degrade the performance of facial biometric systems. Additionally, it was recently demonstrated that makeup can be abused to launch so-called makeup presentation attacks. More precisely, an attacker might apply heavy makeup to obtain the facial appearance of a target subject with the aim of impersonation, or to conceal their own identity. We provide a comprehensive survey of works related to the topic of makeup presentation attack detection, along with a critical discussion. Subsequently, we assess the vulnerability of a commercial off-the-shelf and an open-source face recognition system to makeup presentation attacks. Specifically, we focus on makeup presentation attacks aimed at impersonation, employing the publicly available Makeup Induced Face Spoofing (MIFS) and Disguised Faces in the Wild (DFW) databases. It is shown that makeup presentation attacks might seriously impact the security of face recognition systems. Further, we propose different image pair-based, i.e. differential, attack detection schemes which analyse differences in feature representations obtained from potential makeup presentation attacks and corresponding target face images. The proposed detection systems employ various types of feature extractors, including texture descriptors, facial landmarks, and deep (face) representations. To distinguish makeup presentation attacks from genuine, i.e. bona fide, presentations, machine learning-based classifiers are used. The classifiers are trained with a large number of synthetically generated makeup presentation attacks, utilising a generative adversarial network for facial makeup transfer in conjunction with image warping. Experimental evaluations conducted on the MIFS database and a subset of the DFW database reveal that deep face representations achieve competitive detection equal error rates of 0.7% and 1.8%, respectively.
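The differential detection idea described in the abstract, i.e. classifying the difference between feature representations of a suspected attack image and the corresponding target image, can be sketched in a few lines. Everything below (the embedding dimensionality, the synthetic "makeup shift", the nearest-centroid classifier) is an illustrative assumption, not the paper's actual pipeline, which uses deep face representations and trained machine-learning classifiers:

```python
import numpy as np

# Toy sketch of differential (image-pair-based) attack detection:
# features are DIFFERENCES between the representation of a suspected
# attack image and that of the target image. All embeddings here are
# synthetic stand-ins for real face representations.

rng = np.random.default_rng(0)
DIM = 32  # hypothetical embedding dimensionality

def differential_features(probe, target):
    """Difference of L2-normalised embeddings for one image pair."""
    p = probe / np.linalg.norm(probe)
    t = target / np.linalg.norm(target)
    return p - t

# Hypothetical assumption for the toy data: makeup attacks shift the
# probe embedding in a consistent direction that a classifier can learn.
attack_dir = rng.normal(size=DIM)
attack_dir /= np.linalg.norm(attack_dir)

def make_pair(is_attack):
    target = rng.normal(size=DIM)
    probe = target + rng.normal(scale=0.05, size=DIM)  # capture noise
    if is_attack:
        probe = probe + attack_dir  # consistent makeup-induced shift
    return differential_features(probe, target)

labels = np.array([0] * 200 + [1] * 200)  # 0: bona fide, 1: attack
X = np.array([make_pair(bool(y)) for y in labels])

# Stand-in classifier: nearest class centroid on difference vectors.
mu_bf = X[labels == 0].mean(axis=0)
mu_at = X[labels == 1].mean(axis=0)
pred = (np.linalg.norm(X - mu_at, axis=1)
        < np.linalg.norm(X - mu_bf, axis=1)).astype(int)
accuracy = (pred == labels).mean()
```

The key design point mirrored from the abstract is that the classifier never sees an image's features in isolation: it only sees the probe-minus-target difference vector, so it can focus on makeup-induced deviations rather than identity information.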

Keywords