Intelligent Systems with Applications (Nov 2022)
Face presentation attack identification optimization with adjusting convolution blocks in VGG networks
Abstract
Advances in deep learning are reaching nearly every field and are being applied to a wide range of research problems. Numerous Deep Convolutional Neural Network (DCNN) architectures have been proposed, and their results vary with network depth and hyperparameter values. Developing such DCNN architectures from scratch requires considerable effort, and the resulting networks may not transfer to applications beyond the one they were designed for. Transfer learning offers a way to adapt these pre-trained networks to new and diverse applications. This paper empirically assesses the performance and suitability of existing pre-trained DCNN architectures for human face liveness detection. With the advent of ambient computing and contactless identification of humans through their biometric traits, face liveness detection has become an important research area. Six pre-trained DCNN models, namely VGG16, VGG19, DenseNet121, Xception, MobileNet, and InceptionV3, are considered for empirical assessment in human face liveness detection. The method is explored on two face liveness detection datasets, NUAA and Replay-Attack. Face liveness detection accuracy and Half Total Error Rate (HTER) are the primary performance evaluation metrics. At a learning rate of 10⁻⁴, the VGG19 network under the "Original VGG" scenario gives the highest face liveness detection accuracy, which is the main outcome of the current research.
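For context, HTER is conventionally defined as the mean of the false acceptance rate and the false rejection rate, HTER = (FAR + FRR) / 2, so lower values indicate better anti-spoofing performance. The sketch below is a minimal, hedged illustration (not the authors' exact configuration) of how a pre-trained VGG19 can be adapted for binary face liveness classification via transfer learning in Keras, using the 10⁻⁴ learning rate mentioned in the abstract; the input size, classification head, optimizer choice, and the `train_ds`/`val_ds` dataset objects are assumptions for illustration only.

```python
# Illustrative transfer-learning sketch for face liveness detection with VGG19.
# Assumptions (not from the paper): 224x224 RGB inputs, a small dense head,
# a frozen convolutional base, the Adam optimizer, and hypothetical tf.data
# datasets train_ds / val_ds yielding (image, label) pairs with label 1 = live.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

# Load the VGG19 convolutional base pre-trained on ImageNet, without its classifier.
base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional blocks for initial fine-tuning

# Attach a binary classification head: live (1) vs. presentation attack (0).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # 10^-4, as in the abstract
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Hypothetical training call; train_ds and val_ds are placeholders for the
# NUAA or Replay-Attack data pipelines, which are not specified here.
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```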