Mathematical Biosciences and Engineering (Feb 2024)

SASEGAN-TCN: Speech enhancement algorithm based on self-attention generative adversarial network and temporal convolutional network

  • Rongchuang Lv,
  • Niansheng Chen,
  • Songlin Cheng,
  • Guangyu Fan,
  • Lei Rao,
  • Xiaoyong Song,
  • Wenjing Lv,
  • Dingyu Yang

DOI
https://doi.org/10.3934/mbe.2024172
Journal volume & issue
Vol. 21, no. 3
pp. 3860–3875

Abstract


Traditional unsupervised speech enhancement models often suffer from problems such as the non-aggregation of input feature information, which introduces additional noise during training and thereby reduces the quality of the enhanced speech signal. To address these problems, this paper analyzed the impact of the non-aggregation of input speech feature information on enhancement performance. Furthermore, this article introduced a temporal convolutional network (TCN) and proposed the SASEGAN-TCN speech enhancement model, which captures local feature information and aggregates global feature information to improve model performance and training stability. Simulation results showed that the model achieved a perceptual evaluation of speech quality (PESQ) score of 2.1636 and a short-time objective intelligibility (STOI) of 92.78% on the Valentini dataset, and correspondingly reached 1.8077 and 83.54% on the THCHS30 dataset. In addition, this article fed the enhanced speech data into an acoustic model to verify recognition accuracy. The speech recognition error rate was reduced by 17.4%, a significant improvement over the baseline model.
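For readers unfamiliar with the TCN component mentioned in the abstract, the sketch below shows a minimal dilated causal temporal convolution block of the kind such a generator might use to aggregate long-range (global) context on top of locally extracted features. It is an illustrative assumption written in PyTorch; the layer sizes, dilation schedule, and residual structure shown here are not the paper's exact architecture.

```python
# Minimal sketch of a dilated causal TCN block (illustrative assumption,
# not the exact SASEGAN-TCN architecture from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """1-D convolution padded on the left so no future samples leak in."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        # Pad only the left side of the time axis, then convolve.
        return self.conv(F.pad(x, (self.pad, 0)))


class TCNBlock(nn.Module):
    """Residual block: two dilated causal convolutions with PReLU activations."""

    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.net = nn.Sequential(
            CausalConv1d(channels, channels, kernel_size, dilation),
            nn.PReLU(),
            CausalConv1d(channels, channels, kernel_size, dilation),
            nn.PReLU(),
        )

    def forward(self, x):
        # Residual connection keeps local features while adding wider context.
        return x + self.net(x)


if __name__ == "__main__":
    # Exponentially growing dilations widen the receptive field over time.
    tcn = nn.Sequential(*[TCNBlock(64, dilation=2 ** i) for i in range(4)])
    feats = torch.randn(1, 64, 16384)  # (batch, channels, time): 1 s at 16 kHz
    print(tcn(feats).shape)            # torch.Size([1, 64, 16384])
```

A stack of such blocks preserves the input's temporal resolution while each added dilation level widens the receptive field, which is how a TCN can aggregate global feature information without recurrent layers.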

Keywords