IEEE Access (Jan 2024)
The Usage of Artificial Intelligence Technology in Music Education System Under Deep Learning
Abstract
This work addresses the needs of the music generation field by developing a music generation system based on an advanced Transformer model. The system incorporates an adaptive music feature encoder and an emotion-driven multi-task learning framework. By integrating music theory knowledge with dynamic weight adjustment, the adaptive encoder accurately captures different musical styles and emotional characteristics, while the multi-task learning framework uses emotion tags to enhance the model's ability to generate music with emotional depth. Experimental results on the Lakh MIDI Dataset (LMD) show that the proposed model achieves significant performance improvements, with scores above the industry average: a Bilingual Evaluation Understudy (BLEU) score of 0.43, a Recall-Oriented Understudy for Gisting Evaluation (ROUGE-L, longest common subsequence variant) score of 0.63, and a Metric for Evaluation of Translation with Explicit ORdering (METEOR) score of 0.31. In addition, teaching methods based on the music generation model show clear advantages in improving students' musical skills, emotional expression, and overall satisfaction. These results demonstrate the effectiveness of the proposed method and highlight its potential applications in music education and automated music composition.
Keywords