IEEE Access (Jan 2023)

Self-Attention and MLP Auxiliary Convolution for Face Anti-Spoofing

  • Hanqing Gu,
  • Jiayin Chen,
  • Fusu Xiao,
  • Yi-Jia Zhang,
  • Zhe-Ming Lu

DOI
https://doi.org/10.1109/ACCESS.2023.3335040
Journal volume & issue
Vol. 11
pp. 131152–131167

Abstract

Face features are among the most widely adopted and essential biometric characteristics for identity verification and recognition, and they play a crucial role in ensuring security. This importance, however, also attracts a variety of face attacks, which pose a serious threat to the security of facial recognition systems; face anti-spoofing detection therefore holds substantial practical significance. Although Face Anti-Spoofing (FAS) detection has been studied extensively, there is still room to improve detection performance. In this paper, we propose a face anti-spoofing method named AR-MLP (Attention ResNet-Multilayer Perceptron), which uses self-attention and MLP layers to assist convolution. By replacing the last two “Basic block” layers of the Conv-MLP network with a module that combines self-attention and convolution, and by adjusting the network architecture and parameters, AR-MLP captures global features more effectively. We conduct comprehensive evaluations on three authoritative multimodal face anti-spoofing datasets (WMCA, HQ-WMCA, and CASIA-SURF CeFA), covering intra-dataset, cross-attack, and cross-race testing. The experimental results show that AR-MLP outperforms several strong existing methods in both classification performance and computational overhead.
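The core idea described above is a block that mixes a self-attention branch (global context) with a convolution branch (local features). The following is a minimal NumPy sketch of that general pattern only; the function names, single-head attention, depthwise 3x3 convolution, and the fixed blending weight `alpha` are illustrative assumptions, not the authors' actual AR-MLP implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    # Single-head scaled dot-product self-attention.
    # x: (tokens, dim); wq/wk/wv: (dim, dim) projection matrices.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def conv3x3(x, kernel):
    # Depthwise 3x3 convolution with zero padding.
    # x: (h, w, c); kernel: (3, 3, c), one filter per channel.
    h, w, c = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3, :]
            out[i, j] = np.einsum('klc,klc->c', patch, kernel)
    return out

def hybrid_block(x, wq, wk, wv, kernel, alpha=0.5):
    # Blend a global self-attention branch with a local convolution
    # branch; alpha is a hypothetical fixed mixing weight.
    h, w, c = x.shape
    attn = self_attention(x.reshape(h * w, c), wq, wk, wv).reshape(h, w, c)
    conv = conv3x3(x, kernel)
    return alpha * attn + (1 - alpha) * conv
```

In practice such a block would sit inside a trained network (e.g. replacing the final stages of a convolutional backbone, as the abstract describes for Conv-MLP), with learned projections and a learned mixing scheme rather than the fixed weights shown here.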

Keywords