Applied Sciences (Mar 2021)

A Speech Command Control-Based Recognition System for Dysarthric Patients Based on Deep Learning Technology

  • Yu-Yi Lin,
  • Wei-Zhong Zheng,
  • Wei Chung Chu,
  • Ji-Yan Han,
  • Ying-Hsiu Hung,
  • Guan-Min Ho,
  • Chia-Yuan Chang,
  • Ying-Hui Lai

DOI
https://doi.org/10.3390/app11062477
Journal volume & issue
Vol. 11, no. 6
p. 2477

Abstract


Voice control is an important way of operating mobile devices; however, using it remains a challenge for dysarthric patients. Many approaches, such as automatic speech recognition (ASR) systems, are currently used to help dysarthric patients control mobile devices, but the large computational power required by ASR systems increases implementation costs. To alleviate this problem, this study proposed a convolutional neural network (CNN) operating on phonetic posteriorgram (PPG) speech features to recognize speech commands, called CNN–PPG; a CNN model with Mel-frequency cepstral coefficient features (CNN–MFCC) and an ASR-based system were used for comparison. The experimental results show that the CNN–PPG system achieved 93.49% accuracy, better than the CNN–MFCC (65.67%) and ASR-based (89.59%) systems. Additionally, the CNN–PPG model was smaller, with only 54% of the parameters of the ASR-based system; hence, the proposed system could reduce implementation costs for users. These findings suggest that the CNN–PPG system could augment a communication device to help dysarthric patients control mobile devices via speech commands in the future.
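To make the CNN–PPG idea concrete, the sketch below shows a shape-level forward pass of a small CNN classifier over a phonetic posteriorgram. This is an illustrative assumption, not the authors' architecture: the PPG size (100 frames × 40 phoneme posteriors), filter count, kernel width, number of commands, and the use of random untrained weights are all placeholders for demonstration.

```python
import numpy as np

# Assumed dimensions (illustrative only, not from the paper):
N_FRAMES, N_PHONES = 100, 40   # PPG: time frames x phoneme posterior dims
N_FILTERS, KERNEL = 8, 5       # conv layer: filters sliding along time
N_COMMANDS = 19                # number of speech commands to recognize

rng = np.random.default_rng(0)

def conv1d_time(ppg, weights, bias):
    """Valid 1-D convolution along the time axis; each filter spans all phoneme dims."""
    out_len = ppg.shape[0] - KERNEL + 1
    out = np.empty((out_len, N_FILTERS))
    for t in range(out_len):
        window = ppg[t:t + KERNEL]                 # (KERNEL, N_PHONES)
        out[t] = np.tensordot(weights, window, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)                    # ReLU

def classify(ppg, conv_w, conv_b, fc_w, fc_b):
    feat = conv1d_time(ppg, conv_w, conv_b).mean(axis=0)  # global average pooling
    logits = feat @ fc_w + fc_b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                         # softmax over command classes

# Random weights stand in for trained parameters.
conv_w = rng.normal(scale=0.1, size=(N_FILTERS, KERNEL, N_PHONES))
conv_b = np.zeros(N_FILTERS)
fc_w = rng.normal(scale=0.1, size=(N_FILTERS, N_COMMANDS))
fc_b = np.zeros(N_COMMANDS)

# Each PPG frame is a posterior distribution over phonemes (rows sum to 1).
ppg = rng.dirichlet(np.ones(N_PHONES), size=N_FRAMES)
probs = classify(ppg, conv_w, conv_b, fc_w, fc_b)
print("predicted command index:", probs.argmax())
```

In an ASR-based pipeline, a full recognizer decodes the utterance into text before matching a command; the CNN–PPG approach instead classifies the command directly from the posteriorgram, which is one plausible reason for the smaller parameter count reported above.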
