IEEE Access (Jan 2024)
Enhancing Security in Multimodal Biometric Fusion: Analyzing Adversarial Attacks
Abstract
Biometric recognition has become essential for secure and reliable access control in high-security systems such as surveillance, law enforcement, and smart cities. While deep learning models offer exceptional performance in biometric recognition, they are susceptible to security threats such as adversarial attacks. Current research addresses the vulnerability of single-modality systems; however, there is a gap in understanding the impact of adversarial attacks on the fusion levels of multimodal biometric systems. This research fills this gap by thoroughly assessing the vulnerability of multimodal biometric systems to adversarial attacks, with the goal of enhancing their security in real-world high-security applications. We investigate which fusion level is most secure for combining behavioral and biological biometric traits in multimodal biometric authentication systems under adversarial attack. We assess the security of the different fusion levels using the Fast Gradient Sign Method (FGSM). Our study further contributes by evaluating the magnitude of perturbation needed to generate effective adversarial attacks and by identifying the fusion level that offers the highest security under different perturbation budgets. Our experiments thoroughly analyze the multimodal fusion levels using attack success rate, accuracy, precision, and recall under both clean and adversarial conditions. According to our results, input-level fusion is the most secure of the three fusion levels: across various adversarial attack settings, it shows an average attack success rate of 16.62% on the DenseNet201 architecture and 32.30% on the ArcFace architecture. This analysis supports further investigation into the security of multimodal biometric systems and the development of defense methods for such systems.
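For readers unfamiliar with the attack used in the evaluation, the core FGSM update is x_adv = x + ε · sign(∇x L). The sketch below illustrates this on a toy logistic-regression model whose input gradient is known in closed form; the model, weights, and ε value are illustrative assumptions, not the architectures or settings studied in the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM step: move each input feature by eps in the direction that
    increases the loss, then clip back to the valid [0, 1] input range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def input_gradient(x, w, b, y):
    """Gradient of the cross-entropy loss w.r.t. the input x for a
    logistic-regression model p = sigmoid(w.x + b); it equals (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return (p - y) * w

# Illustrative model and sample (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.6, 0.4])   # clean input, true label y = 1
y = 1.0

g = input_gradient(x, w, b, y)
x_adv = fgsm_perturb(x, g, eps=0.1)  # adversarial input, L-inf distance <= 0.1
```

A larger ε makes the attack more effective but more perceptible, which is why the paper sweeps perturbation magnitudes when comparing fusion levels.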
Keywords