IET Biometrics (Jul 2021)

An exploratory study of interpretability for face presentation attack detection

  • Ana F. Sequeira,
  • Tiago Gonçalves,
  • Wilson Silva,
  • João Ribeiro Pinto,
  • Jaime S. Cardoso

DOI
https://doi.org/10.1049/bme2.12045
Journal volume & issue
Vol. 10, no. 4
pp. 441–455

Abstract

Biometric recognition and presentation attack detection (PAD) methods rely strongly on deep learning algorithms. Though often more accurate, these models operate as complex black boxes, and interpretability tools are now being used to delve deeper into their operation; this work therefore advocates their integration into the PAD scenario. Building upon previous work, a face PAD model based on convolutional neural networks was implemented and evaluated both through traditional PAD metrics and with interpretability tools. The stability of the explanations is evaluated by testing models against attacks both known and unknown at the learning step. To overcome the limitations of direct comparison, a suitable representation of the explanations is constructed to quantify how much two explanations differ from each other. From the point of view of interpretability, the results obtained in intra- and inter-class comparisons lead to the conclusion that the presence of more attacks during training has a positive effect on the generalisation and robustness of the models. This exploratory study confirms the need to establish new approaches in biometrics that incorporate interpretability tools, as well as methodologies to assess and compare the quality of explanations.
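The abstract's idea of quantifying how much two explanations differ can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the function name, the flatten-and-normalise representation, and the choice of cosine distance are all assumptions made for the sake of the example.

```python
import numpy as np

def explanation_distance(expl_a, expl_b):
    """Quantify how much two explanation maps (e.g. saliency heatmaps)
    differ, using cosine distance on their flattened, L2-normalised
    representations. The distance ranges from 0 (identical direction)
    to 2 (opposite); orthogonal explanations score 1."""
    a = np.asarray(expl_a, dtype=float).ravel()
    b = np.asarray(expl_b, dtype=float).ravel()
    # Normalise each map so the comparison is invariant to overall scale.
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return 1.0 - float(np.dot(a, b))

# Example: identical explanations have distance ~0,
# non-overlapping ones have distance ~1.
d_same = explanation_distance([[1.0, 0.0], [0.0, 1.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
d_diff = explanation_distance([1.0, 0.0], [0.0, 1.0])
```

Such a scalar distance makes intra-class comparisons (explanations for samples of the same class) directly comparable with inter-class ones, which is the kind of analysis the abstract describes.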

Keywords