IEEE Access (Jan 2022)

NAS-TasNet: Neural Architecture Search for Time-Domain Speech Separation

  • Joo-Hyun Lee,
  • Joon-Hyuk Chang,
  • Jae-Mo Yang,
  • Han-Gil Moon

DOI
https://doi.org/10.1109/ACCESS.2022.3176003
Journal volume & issue
Vol. 10
pp. 56031–56043

Abstract


The fully convolutional time-domain speech separation network (Conv-TasNet) has been used as a backbone model in various studies because of its structural strengths. To maximize the performance and efficiency of Conv-TasNet, we apply neural architecture search (NAS), a branch of automated machine learning that searches for an optimal model structure while minimizing human intervention. In this study, we introduce candidate operations that define the NAS search space for Conv-TasNet. We also introduce a low-computational-cost NAS method to overcome a limitation of the backbone model, namely its large GPU memory consumption during training. We then determine optimized separation-module structures using two search strategies, one based on gradient descent and one on reinforcement learning. Moreover, naively applying NAS causes an imbalance in the updates of the architecture parameters (the NAS parameters). We therefore introduce an auxiliary-loss method suited to the Conv-TasNet architecture that balances architecture-parameter updates across the entire model. We show that this auxiliary-loss technique mitigates the update imbalance and improves separation accuracy.
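For readers unfamiliar with gradient-based NAS, the sketch below illustrates the general idea behind this family of methods: each searched layer is a softmax-weighted mixture of candidate operations governed by learnable architecture parameters, and an auxiliary loss head attached to an early block feeds gradient directly to the early architecture parameters, countering the update imbalance described above. Everything here is an illustrative assumption, not the paper's exact design: the PyTorch framing, the candidate pool of depthwise convolutions, the auxiliary-head placement, the auxiliary weight 0.3, and the MSE stand-in for a separation objective such as SI-SNR. Real NAS pipelines also typically update architecture parameters on a held-out set in a bilevel scheme, which is omitted for brevity.

```python
# Minimal sketch of a DARTS-style mixed operation with an auxiliary loss head.
# All names, the candidate set, and the loss weighting are illustrative
# assumptions; the paper's actual search space and auxiliary-loss placement
# may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Weighted sum of candidate operations; the weights come from a softmax
    over learnable architecture parameters (the 'NAS parameters')."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Assumed candidate pool: depthwise convs with different receptive
        # fields, plus identity.
        self.ops = nn.ModuleList(
            [nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
             for k in kernel_sizes] + [nn.Identity()]
        )
        # One architecture parameter per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class TinySearchNet(nn.Module):
    """Stack of mixed ops with an auxiliary output after an early block,
    so gradient reaches the early architecture parameters directly."""
    def __init__(self, channels=64, n_blocks=4, aux_after=1):
        super().__init__()
        self.blocks = nn.ModuleList(MixedOp(channels) for _ in range(n_blocks))
        self.aux_after = aux_after
        self.aux_head = nn.Conv1d(channels, channels, 1)  # assumed aux head
        self.head = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        aux_out = None
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i == self.aux_after:
                aux_out = self.aux_head(x)
        return self.head(x), aux_out

if __name__ == "__main__":
    net = TinySearchNet()
    mix = torch.randn(2, 64, 1000)     # batch of encoded mixtures
    target = torch.randn(2, 64, 1000)  # stand-in separation target
    out, aux = net(mix)
    # Total loss = main loss + weighted auxiliary loss; MSE stands in for a
    # separation objective such as SI-SNR, and 0.3 is an assumed weight.
    loss = F.mse_loss(out, target) + 0.3 * F.mse_loss(aux, target)
    loss.backward()
    # Early architecture parameters now receive gradient through the
    # auxiliary head as well, mitigating the update imbalance.
    print(net.blocks[0].alpha.grad)
```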

Keywords