IEEE Open Journal of Signal Processing (Jan 2024)
JEP-KD: Joint-Embedding Predictive Architecture Based Knowledge Distillation for Visual Speech Recognition
Abstract
Visual Speech Recognition (VSR) tasks are generally recognized to have a lower theoretical performance ceiling than Automatic Speech Recognition (ASR), owing to the inherent limitations of conveying semantic information visually. To mitigate this challenge, this paper introduces JEP-KD, an advanced knowledge distillation approach based on a Joint-Embedding Predictive Architecture (JEPA), designed to use audio features more effectively during model training. Central to JEP-KD is the inclusion of a generative network within the embedding layer of the knowledge distillation structure, which enhances the video encoder's capacity for semantic feature extraction and brings its output closer to the audio features produced by a pre-trained ASR model's encoder. This approach aims to progressively reduce the performance gap between VSR and ASR. Moreover, a comprehensive multimodal, multistage training regimen for the JEP-KD framework is established, bolstering the robustness and efficacy of the training process. Experimental results show that JEP-KD significantly improves the performance of VSR models and demonstrates versatility across different VSR platforms, indicating its potential for broader application within other multimodal tasks.
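To make the core idea concrete, the following PyTorch sketch shows one way an embedding-layer generative predictor could map video-encoder features toward the features of a frozen, pre-trained ASR audio encoder under a simple regression-style distillation objective. The names (JEPKDPredictor, distillation_loss), the feature dimensions, the Transformer-based predictor, and the L2 loss are all illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class JEPKDPredictor(nn.Module):
    """Sketch of a JEP-KD-style embedding-layer predictor: a small
    generative network that maps video-encoder features toward the
    feature space of a frozen ASR audio encoder. Depth, width, and
    the Transformer choice are assumptions for illustration."""

    def __init__(self, video_dim: int = 512, audio_dim: int = 512):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=video_dim, nhead=8, batch_first=True)
        self.generator = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(video_dim, audio_dim)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # Predict audio-like semantic features from video features.
        return self.proj(self.generator(video_feats))


def distillation_loss(video_feats: torch.Tensor,
                      audio_feats: torch.Tensor,
                      predictor: JEPKDPredictor) -> torch.Tensor:
    """Align predicted features with the (detached) teacher audio
    features; an L2 objective is assumed here for illustration."""
    pred = predictor(video_feats)
    return nn.functional.mse_loss(pred, audio_feats.detach())


# Example with random stand-in features of shape (batch, time, dim);
# in practice these would come from the video and audio encoders.
video = torch.randn(2, 50, 512)
audio = torch.randn(2, 50, 512)
loss = distillation_loss(video, audio, JEPKDPredictor())
```

Because the predictor carries the burden of translating between modalities, the video encoder is free to concentrate on extracting semantic content, which is the intuition behind placing the generative network at the embedding layer rather than matching encoder outputs directly.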
Keywords