IEEE Access (Jan 2024)
Enhancing Security: Infused Hybrid Vision Transformer for Signature Verification
Abstract
Handwritten signature verification is challenging because handwritten signatures vary widely in orientation, stroke thickness, and overall appearance. A robust signature verification system is essential to improve the accuracy of user authentication. This study introduces a comprehensive framework for training and evaluating hybrid vision transformer models on diverse signature datasets, with the aim of improving the accuracy of user authentication. In previous studies, transformers and MobileNet were applied separately to computer vision classification and signature verification. Drawing inspiration from the Convolutional Neural Network (CNN), we propose a hybrid model that combines a deep-learning feature extractor (ResNet-18 and MobileNetV2) with a Vision Transformer (proposed method 1 and proposed method 2). To bring originality to this study, we exclude the final layer of the feature extractor and integrate its output directly with the initial layer of the vision transformer, yielding a unique hybrid vision transformer model. Furthermore, we incorporate the swish and hyperbolic tangent (tanh) activation functions into the validation model to enhance its performance. Experimental results demonstrate the effectiveness of the proposed hybrid model, which achieves 92.33% accuracy on BHSig-Bengali, 99.89% on BHSig-Hindi, 99.96% on CEDAR, and 74.09% on UTSig-Persian. The practical implications of this research extend to real-time signature verification for secure and efficient user authentication, particularly in mobile applications. This advancement in signature verification technology opens new possibilities for practical use in diverse scenarios beyond academia.
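The core architectural idea, dropping the feature extractor's final classification layer and feeding its feature maps into the first layer of a vision transformer, can be illustrated with a minimal PyTorch sketch. This is an illustrative assumption of the design described above, not the authors' released implementation: the MobileNetV2 backbone, embedding dimension, encoder depth, and classification head are hypothetical choices used only to show how the two parts connect.

```python
# Minimal sketch of a CNN/ViT hybrid for signature verification (assumed design).
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2


class HybridViT(nn.Module):
    def __init__(self, embed_dim=256, num_heads=4, depth=4, num_classes=2):
        super().__init__()
        # MobileNetV2 with its final classifier removed; only the convolutional
        # feature extractor is kept (output: B x 1280 x H/32 x W/32).
        backbone = mobilenet_v2(weights=None)
        self.features = backbone.features
        # 1x1 convolution maps CNN channels to the transformer token dimension,
        # replacing the ViT's usual patch-embedding layer.
        self.proj = nn.Conv2d(1280, embed_dim, kernel_size=1)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Classification head; swish (SiLU) is used here as one of the
        # activation choices mentioned in the abstract (tanh is an alternative).
        self.head = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.SiLU(),
            nn.Linear(embed_dim, num_classes))

    def forward(self, x):
        f = self.proj(self.features(x))            # (B, D, h, w)
        tokens = f.flatten(2).transpose(1, 2)      # (B, h*w, D) patch-like tokens
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(z[:, 0])                  # genuine vs. forged logits


# Usage: grayscale signature images replicated to 3 channels, resized to 224x224.
model = HybridViT()
logits = model(torch.randn(2, 3, 224, 224))
```

The design choice illustrated here is that the CNN supplies locally rich feature maps, which the transformer then treats as a token sequence for global self-attention, rather than splitting the raw image into fixed patches.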
Keywords