Journal of Informatics and Web Engineering (Oct 2024)
Sibling Discrimination Using Linear Fusion on Deep Learning Face Recognition Models
Abstract
Facial recognition technology has revolutionised human identification, providing a non-invasive alternative to traditional biometric methods such as signature and voice recognition. The integration of deep learning has significantly enhanced the accuracy and adaptability of these systems, which are now widely used in criminal identification, access control, and security. Initial research focused on recognising full-frontal facial features, but recent advancements have tackled the challenge of identifying partially visible faces, a scenario that often reduces recognition accuracy. This study aims to identify siblings from facial features, particularly in cases where only partial features such as the eyes, nose, or mouth are visible. Utilising advanced deep learning models, namely VGG19, VGG16, VGGFace, and FaceNet, the research introduces a framework to differentiate between sibling images effectively. To boost discrimination accuracy, the framework employs a linear fusion technique that combines the outputs of all four models. The methodology involves preprocessing image pairs, extracting embeddings with the pre-trained models, and integrating the resulting scores through linear fusion. Evaluation metrics, including confusion matrix analysis, assess the framework's robustness and precision. Custom datasets of cropped sibling facial regions form the experimental basis, testing the models under conditions such as different facial poses and cropped regions. Model selection emphasises accuracy and extensive training on large datasets to ensure reliable performance in distinguishing subtle facial differences. Experimental results show that combining multiple models' outputs via linear fusion improves both the accuracy and the realism of sibling discrimination based on facial features. Findings indicate a minimum accuracy of 96% across the different facial regions.
Although this is slightly lower than the accuracy achieved by a single model like VGG16 with full-frontal poses, the fusion approach provides a more realistic outcome by incorporating insights from all four models. This underscores the potential of advanced deep learning techniques in enhancing facial recognition systems for practical applications.
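The linear fusion of per-model similarity scores described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the random vectors stand in for embeddings from VGG16, VGG19, VGGFace, and FaceNet, and the uniform weights and decision threshold are hypothetical placeholders.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def linear_fusion(similarities, weights=None):
    """Weighted linear combination of per-model similarity scores.

    With no weights given, this reduces to the plain average.
    """
    sims = np.asarray(similarities, dtype=float)
    if weights is None:
        weights = np.full(len(sims), 1.0 / len(sims))
    return float(np.dot(np.asarray(weights, dtype=float), sims))

# Placeholder embeddings standing in for the four models' outputs
# on a pair of face crops (e.g. eyes, nose, or mouth regions).
rng = np.random.default_rng(0)
embeddings_a = [rng.normal(size=128) for _ in range(4)]
embeddings_b = [rng.normal(size=128) for _ in range(4)]

scores = [cosine_similarity(a, b) for a, b in zip(embeddings_a, embeddings_b)]
fused = linear_fusion(scores)
same_person = fused >= 0.5  # hypothetical decision threshold
```

In a real pipeline the four embedding lists would come from the respective pre-trained networks after the same preprocessing of each image pair, and the fusion weights or threshold could be tuned on a validation split.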
Keywords