EURASIP Journal on Image and Video Processing (Jan 2020)

Adversarial attacks on fingerprint liveness detection

  • Jianwei Fei,
  • Zhihua Xia,
  • Peipeng Yu,
  • Fengjun Xiao

DOI
https://doi.org/10.1186/s13640-020-0490-z
Journal volume & issue
Vol. 2020, no. 1
pp. 1–11

Abstract

Deep neural networks are vulnerable to adversarial samples, posing potential threats to applications that deploy deep learning models in practical conditions. A typical example is the fingerprint liveness detection module in fingerprint authentication systems. Inspired by the great progress of deep learning, deep network-based fingerprint liveness detection algorithms have sprung up and now dominate the field. In this paper, we therefore investigate the feasibility of deceiving state-of-the-art deep network-based fingerprint liveness detection schemes by exploiting this vulnerability. Extensive evaluations are made with three existing adversarial methods: FGSM, MI-FGSM, and DeepFool. We also propose an adversarial attack method that enhances the robustness of adversarial fingerprint images to various transformations such as rotation and flipping. We demonstrate that these outstanding schemes are likely to classify fake fingerprints as live ones when tiny perturbations are added, even without access to the internal details of the underlying model. The experimental results reveal a serious security loophole in these schemes, and urgent attention should be paid to adversarial robustness, not only in fingerprint liveness detection but in all deep learning applications.
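To illustrate the kind of perturbation-based attack the abstract refers to, the sketch below implements one-step FGSM in PyTorch. It is a minimal, hypothetical example and not the authors' actual method; the model, inputs, and epsilon value are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03, targeted=False):
    """One-step FGSM sketch (hypothetical helper, not the paper's exact method).

    Untargeted: step in the gradient-sign direction to increase the loss
    on the true label y. Targeted: step against the gradient to decrease
    the loss on a desired label y (e.g., the 'live' class, so that a
    spoof fingerprint is pushed toward being classified as live).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    step = eps * x.grad.sign()
    x_adv = x - step if targeted else x + step
    # Keep the adversarial image in the valid pixel range.
    return x_adv.clamp(0, 1).detach()
```

MI-FGSM extends this one-step update by iterating it with an accumulated gradient momentum term, and DeepFool instead searches for the smallest perturbation that crosses the decision boundary; both follow the same gradient-driven principle shown here.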

Keywords