Frontiers in Neuroscience (Feb 2025)

EEG analysis of speaking and quiet states during different emotional music stimuli

  • Xianwei Lin,
  • Xinyue Wu,
  • Zefeng Wang,
  • Zhengting Cai,
  • Zihan Zhang,
  • Guangdong Xie,
  • Lianxin Hu,
  • Laurent Peyrodie

DOI
https://doi.org/10.3389/fnins.2025.1461654
Journal volume & issue
Vol. 19

Abstract


Introduction
Music has a profound impact on human emotions and can elicit a wide range of emotional responses, a phenomenon that has been effectively harnessed in music therapy. Given the close relationship between music and language, researchers have begun to explore how music influences brain activity and cognitive processes by combining artificial intelligence with advances in neuroscience.

Methods
In this study, a total of 120 subjects were recruited, all of whom were students aged between 19 and 26 years. Each subject was required to listen to six 1-minute music segments expressing different emotions and to speak at the 40-second mark of each segment. For the classification model, this study compares the performance of deep neural networks with that of other machine learning algorithms.

Results
Differences in EEG signals across emotions are more pronounced during speech than in the quiet state. For classifying EEG signals in the speaking and quiet states, the deep neural network algorithm achieves accuracies of 95.84% and 96.55%, respectively.

Discussion
Under stimulation by music expressing different emotions, EEG signals differ between the speaking and quiet states. In constructing EEG classification models, the deep neural network algorithm outperforms the other machine learning algorithms.
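
The abstract does not specify the network architecture or the baseline algorithms, so the following is only a minimal sketch of the kind of comparison described: a deep neural network versus other common classifiers on EEG feature vectors. The feature dimensions, class labels, and hyperparameters are assumptions for illustration, and the data here are random placeholders rather than the study's recordings.

```python
# Minimal sketch (not the authors' code): comparing a deep neural network
# with other machine learning classifiers on EEG feature vectors.
# All dimensions, labels, and hyperparameters below are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder data: rows are EEG feature vectors (e.g., band-power features
# per channel); labels are the six music-emotion categories.
rng = np.random.default_rng(0)
X = rng.normal(size=(720, 64))    # 120 subjects x 6 segments, 64 features (assumed)
y = rng.integers(0, 6, size=720)  # 6 emotion classes (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "deep neural network": MLPClassifier(hidden_layer_sizes=(128, 64, 32),
                                         max_iter=500, random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.2%}")
```

In practice, one such model would be trained on features from the speaking-state segments and another on the quiet-state segments, yielding the two accuracies reported in the abstract.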
