IEEE Transactions on Neural Systems and Rehabilitation Engineering (Jan 2022)

Automated Dysarthria Severity Classification: A Study on Acoustic Features and Deep Learning Techniques

  • Amlu Anna Joshy,
  • Rajeev Rajan

DOI
https://doi.org/10.1109/TNSRE.2022.3169814
Journal volume & issue
Vol. 30
pp. 1147–1157

Abstract

Assessing the severity level of dysarthria can provide insight into a patient's progress, assist pathologists in planning therapy, and aid automatic dysarthric speech recognition systems. In this article, we present a comparative study on the classification of dysarthria severity levels using different deep learning techniques and acoustic features. First, we evaluate basic architectural choices, namely the deep neural network (DNN), convolutional neural network (CNN), gated recurrent unit (GRU), and long short-term memory (LSTM) network, using standard speech features: Mel-frequency cepstral coefficients (MFCCs) and constant-Q cepstral coefficients (CQCCs). Next, speech-disorder-specific features computed from prosody, articulation, phonation, and glottal functioning are evaluated on DNN models. Finally, we explore low-dimensional feature representations obtained through subspace modeling, namely i-vectors, which are then classified using DNN models. Evaluation is carried out on the standard UA-Speech and TORGO databases. The DNN classifier using MFCC-based i-vectors outperforms the other systems, achieving accuracies of 93.97% in the speaker-dependent scenario and 49.22% in the speaker-independent scenario on the UA-Speech database.
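The MFCC front end underlying the best-performing system can be sketched with NumPy alone: frame the waveform, take a windowed power spectrum, pool it through a triangular mel filterbank, and decorrelate the log energies with a DCT. The frame length, hop size, and filter counts below are illustrative defaults, not the settings used in the paper, and the i-vector subspace modeling stage is omitted.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters with centers evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising edge
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling edge
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr, n_fft=512, hop=256, n_filters=26, n_ceps=13):
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    fb = mel_filterbank(n_filters, n_fft, sr)
    # DCT-II basis to decorrelate the log filterbank energies.
    k = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), k + 0.5) / n_filters)
    feats = np.empty((n_frames, n_ceps))
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + n_fft] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        logmel = np.log(fb @ power + 1e-10)        # floor avoids log(0)
        feats[t] = basis @ logmel
    return feats

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)               # 1 s synthetic 440 Hz tone
feats = mfcc(tone, sr)
print(feats.shape)                                 # (61, 13)
```

Per-frame MFCC matrices like this one would then feed either the sequence models directly or, in the paper's best system, a subspace model that compresses each utterance into a single fixed-length i-vector for DNN classification.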

Keywords