IEEE Access (Jan 2020)

Differential Detection of Facial Retouching: A Multi-Biometric Approach

  • C. Rathgeb,
  • C.-I. Satnoianu,
  • N. E. Haryanto,
  • K. Bernardo,
  • C. Busch

DOI
https://doi.org/10.1109/ACCESS.2020.3000254
Journal volume & issue
Vol. 8
pp. 106373–106385

Abstract


Facial retouching apps have become common tools that are frequently used to improve one's facial appearance, e.g. before sharing face images via social media. Beautification induced by retouching can substantially alter the appearance of face images and hence may pose a challenge for face recognition. Towards deploying secure face recognition as well as enforcing anti-photoshop legislation, a robust and reliable detection of retouched face images is needed. Published approaches consider a single image-based (no-reference) scenario in which a potentially retouched face image serves as the sole input to the retouching detector. In many cases, however, a trusted unaltered face image of the examined subject is available, which enables an image pair-based (differential) detection scheme. In this work, ICAO-compliant subsets of the FERET and FRGCv2 face databases are used to automatically create a database containing 9,078 retouched face images together with unconstrained probe images. In evaluations employing the commercial Cognitec FaceVACS and the open-source ArcFace face recognition system, it is shown that facial retouching can negatively impact face recognition performance. Further, a differential facial retouching detection system is proposed which processes pairs consisting of a potentially retouched reference image and a corresponding unaltered probe image of the same subject. Differences between feature vectors obtained from texture descriptors, facial landmarks, and deep face representations are leveraged by machine learning-based classifiers whose detection scores are fused to distinguish between retouched and unaltered face images. The proposed scheme is evaluated in a cross-database scenario where training and testing are performed on the FERET and FRGCv2 databases and vice versa. In the scenario where the applied retouching algorithm is known to the detection algorithm, a competitive average D-EER of approximately 2% is achieved. In the scenario where the retouching algorithm is unknown to the detection algorithm, the proposed approach obtains an average D-EER below 10% and is shown to outperform several state-of-the-art single image-based detection schemes.
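The following is a minimal sketch of the differential detection idea described in the abstract: per feature type, a difference vector between the potentially retouched reference and the trusted probe is fed to a classifier, and the per-feature detection scores are fused. The extractor callables, the use of SVMs, and the simple mean-rule fusion are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of a differential retouching detector. The feature extractors
# (texture descriptor, landmark-based, deep embedding) are assumed to be
# supplied by the caller as callables returning 1-D numpy arrays; classifier
# choice and equal-weight score fusion are assumptions for illustration.
import numpy as np
from sklearn.svm import SVC


def difference_vector(extract, reference_img, probe_img):
    """Feature-level difference between a potentially retouched reference
    image and a trusted, unaltered probe image of the same subject."""
    return extract(reference_img) - extract(probe_img)


class DifferentialRetouchingDetector:
    def __init__(self, extractors):
        # One classifier per feature type (e.g., texture, landmarks, deep).
        self.extractors = extractors
        self.classifiers = [SVC(probability=True) for _ in extractors]

    def fit(self, image_pairs, labels):
        # image_pairs: list of (reference, probe); labels: 1 = retouched
        # reference, 0 = unaltered reference.
        for extract, clf in zip(self.extractors, self.classifiers):
            X = np.stack([difference_vector(extract, ref, probe)
                          for ref, probe in image_pairs])
            clf.fit(X, labels)

    def score(self, reference_img, probe_img):
        # Fuse the per-feature detection scores (here: simple mean rule).
        scores = []
        for extract, clf in zip(self.extractors, self.classifiers):
            d = difference_vector(extract, reference_img,
                                  probe_img).reshape(1, -1)
            scores.append(clf.predict_proba(d)[0, 1])
        return float(np.mean(scores))
```

A detection threshold on the fused score would then be chosen on a validation set, e.g. at the operating point corresponding to the reported D-EER.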
