Scientific Reports (Dec 2021)

VSUGAN: unifying voice style based on spectrograms and generative adversarial networks

  • Tongjie Ouyang,
  • Zhijun Yang,
  • Huilong Xie,
  • Tianlin Hu,
  • Qingmei Liu

DOI
https://doi.org/10.1038/s41598-021-03770-2
Journal volume & issue
Vol. 11, no. 1
pp. 1–10

Abstract

In course recording, audio captured with different pickups and in different environments is clearly distinguishable and causes style differences after splicing, which degrades the quality of the recorded courses. A common way to mitigate this problem is voice style unification. In the present study, we propose a voice style unification model based on generative adversarial networks (VSUGAN) that transfers voice style at the spectrogram level. VSUGAN synthesizes audio by combining the style information from an audio style template with the voice information from the processed audio, and it allows audio style unification in different environments without retraining the network for new speakers. VSUGAN is implemented and evaluated on the THCHS-30 and VCTK-Corpus corpora. The source code of VSUGAN is available at https://github.com/oy-tj/VSUGAN . In summary, the results demonstrate that VSUGAN can effectively improve the quality of recorded audio and reduce style differences across a variety of recording environments.
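To make the data flow concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a generator fuses voice (content) information from the processed spectrogram with style information summarized from a style-template spectrogram, and a discriminator scores the result, as in a standard GAN. All module names, layer sizes, the mel-bin count N_MELS, and the time-averaged style-fusion scheme here are illustrative assumptions, not the authors' published architecture; see the repository above for the official implementation.

# Hypothetical sketch of spectrogram-level style unification with a GAN.
# Architecture details are assumptions; only the overall idea (style template
# + processed audio -> style-unified spectrogram) comes from the abstract.
import torch
import torch.nn as nn

N_MELS = 80  # assumed number of mel-spectrogram bins

class StyleEncoder(nn.Module):
    """Summarizes a style-template spectrogram into a fixed-size style vector."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(N_MELS, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, style_dim, kernel_size=5, padding=2), nn.ReLU(),
        )

    def forward(self, style_spec):        # (B, N_MELS, T_style)
        h = self.conv(style_spec)         # (B, style_dim, T_style)
        return h.mean(dim=2)              # average over time -> (B, style_dim)

class Generator(nn.Module):
    """Re-renders a content spectrogram in the style of the template."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.style_enc = StyleEncoder(style_dim)
        self.net = nn.Sequential(
            nn.Conv1d(N_MELS + style_dim, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, N_MELS, kernel_size=5, padding=2),
        )

    def forward(self, content_spec, style_spec):
        s = self.style_enc(style_spec)                          # (B, style_dim)
        s = s.unsqueeze(2).expand(-1, -1, content_spec.size(2)) # broadcast over time
        return self.net(torch.cat([content_spec, s], dim=1))    # (B, N_MELS, T)

class Discriminator(nn.Module):
    """Scores whether a spectrogram matches the target style distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_MELS, 256, kernel_size=5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv1d(256, 1, kernel_size=5, padding=2),
        )

    def forward(self, spec):
        return self.net(spec).mean(dim=(1, 2))  # one realness score per example

# Smoke test with random tensors standing in for mel spectrograms.
G, D = Generator(), Discriminator()
content = torch.randn(4, N_MELS, 200)   # processed course audio
template = torch.randn(4, N_MELS, 120)  # style template from the target setup
fake = G(content, template)
print(fake.shape, D(fake).shape)        # torch.Size([4, 80, 200]) torch.Size([4])

Averaging the style encoder's output over time is one simple way to obtain a fixed-size style vector from a template of any length, which matches the abstract's claim that new speakers and environments need no retraining; the paper's actual conditioning mechanism may differ.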