Applied Mathematics and Nonlinear Sciences (Jan 2024)
A Comparative Study of Teaching Effectiveness in Emotionally Empowered Music Classrooms from a Multimodal Perspective
Abstract
In this paper, the librosa library is used to compute the mean and variance of spectral features, which serve as the audio-modality emotion features. The lyric-modality emotion features are then obtained by representing the lyric text with the Doc2Vec algorithm, which maps text from natural language into mathematical vector form. Taking the audio-modality emotion features as the main modality and the lyric-modality emotion features as the target modality, the multimodal emotion features are fused with an encoder-decoder architecture. Based on multimodal theory, a music teaching model that integrates multimodal emotion features is designed, and its teaching effect is analyzed. The music emotion extraction accuracy of the proposed model is 7.05% higher than SVM, 3.97% higher than CNN, and 0.95% higher than HMM, and the model outperforms the control models in Precision, Recall, and F1. In addition, the control group and the experimental group differ significantly in beat imitation ability, the ability to count beats while listening to music, and the ability to imitate movement rhythms, with P-values of 0.004, 0.012, and 0.037, respectively. Optimizing the organization of music teaching and innovating the teaching mode through multimodal emotion features further promotes reform of music classroom teaching.
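The audio-modality step described above, pooling frame-level spectral features into clip-level mean and variance statistics, can be sketched as follows. The paper uses librosa; this minimal sketch stands in with plain NumPy, and the choice of spectral centroid as the illustrative feature is an assumption, not the paper's exact feature set.

```python
# Sketch (assumed detail): per-frame spectral centroid pooled into a
# [mean, variance] vector, mirroring the audio-modality feature step.
import numpy as np

def spectral_centroid_frames(y, sr, n_fft=1024, hop=512):
    """Per-frame spectral centroid (Hz) of a mono signal y."""
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    window = np.hanning(n_fft)
    centroids = []
    for start in range(0, len(y) - n_fft + 1, hop):
        mag = np.abs(np.fft.rfft(y[start:start + n_fft] * window))
        total = mag.sum()
        centroids.append((freqs * mag).sum() / total if total > 0 else 0.0)
    return np.array(centroids)

def pool_mean_var(feature_frames):
    """Collapse a sequence of frame features to a [mean, variance] vector."""
    return np.array([feature_frames.mean(), feature_frames.var()])

# Usage: a one-second 440 Hz tone; the pooled mean should sit near 440 Hz
sr = 22050
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440.0 * t)
audio_feature = pool_mean_var(spectral_centroid_frames(y, sr))
```

In the paper's pipeline, vectors of this kind form the main modality that the encoder-decoder fuses with the Doc2Vec lyric vectors.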
Keywords