IEEE Access (Jan 2023)

The Analysis of Music Emotion and Visualization Fusing Long Short-Term Memory Networks Under the Internet of Things

  • Yujing Cao,
  • Jinwan Park

DOI
https://doi.org/10.1109/ACCESS.2023.3341926
Journal volume & issue
Vol. 11
pp. 141192–141204

Abstract

This paper aims to provide automatic analysis of music emotion, enhancing users' ability to intuitively perceive and comprehend the emotional nuances conveyed in music. It first reviews the limitations of traditional music emotion analysis methods and highlights the transformative potential of Internet of Things (IoT) technology in addressing them. The paper then employs a Long Short-Term Memory (LSTM) network to model time-series data in music, integrating it with a Sequence-to-Sequence (STS) framework to construct an advanced music emotion analysis model. Finally, several machine learning algorithms are adopted to train and evaluate the system. The findings underscore the efficacy of the music emotion analysis model based on the LSTM network fused with STS, demonstrating notable success in music emotion prediction. Specifically, the fusion model achieves a mean absolute error of 0.921, a root mean square error (RMSE) of 0.534, and an R-squared of 0.498 for the Arousal value, and a mean absolute error of 0.902, an RMSE of 0.575, and an R-squared of 0.478 for the Valence value. This work contributes to a deeper understanding of music emotion and provides guidance for music recommendation and creation.
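The abstract describes an LSTM encoder-decoder (Sequence-to-Sequence) model that regresses continuous Arousal and Valence values from music time-series features. The sketch below is not the authors' code; the feature dimension, hidden size, sequence length, and class/variable names are illustrative assumptions showing one way such a model could be wired up in PyTorch.

```python
# Minimal sketch of an LSTM-based Seq2Seq regressor for per-frame
# Arousal/Valence prediction. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMSeq2SeqEmotion(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=128):
        super().__init__()
        # Encoder LSTM summarizes the acoustic feature sequence.
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Decoder LSTM is initialized with the encoder's final state
        # and unrolls over the same time axis to emit emotion values.
        self.decoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Linear head maps each decoder state to (Arousal, Valence).
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, x):
        # x: (batch, time, feat_dim) audio features, e.g. MFCC frames.
        _, state = self.encoder(x)
        out, _ = self.decoder(x, state)
        return self.head(out)  # (batch, time, 2)

# Toy usage: 8 clips, each with 60 frames of 40-dim features.
model = LSTMSeq2SeqEmotion()
features = torch.randn(8, 60, 40)
targets = torch.randn(8, 60, 2)               # continuous Arousal/Valence labels
loss = nn.L1Loss()(model(features), targets)  # mean absolute error, as reported
loss.backward()
```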

Keywords