Applied Sciences (Oct 2024)
Development of a Deep Learning Model for Predicting Speech Audiometry Using Pure-Tone Audiometry Data
Abstract
Speech audiometry is a vital tool for assessing an individual’s ability to perceive and comprehend speech, but it traditionally requires specialized testing that can be time-consuming and resource-intensive. This paper proposes a novel use of deep learning to predict speech audiometry outcomes from pure-tone audiometry (PTA) data. Because PTA data measure hearing sensitivity at specific frequencies, we aim to develop a model that can bypass the need for direct speech testing. This study investigates two neural network architectures: a multi-layer perceptron (MLP) and a one-dimensional convolutional neural network (1D-CNN). These models are trained to predict key speech audiometry outcomes, including speech recognition thresholds and speech discrimination scores. To evaluate the effectiveness of these models, we employed two key performance metrics: the coefficient of determination (R2) and the mean absolute error (MAE). The MLP model demonstrated solid predictive power, with an R2 score of 88.79% and an average MAE of 7.26, while the 1D-CNN model achieved a slightly lower prediction error, with an R2 score of 88.35% and an MAE of 6.90. The lower MAE of the 1D-CNN model suggests that it captures relevant features from PTA data more effectively than the MLP. These results show that both models hold promise for predicting speech audiometry, potentially simplifying the audiological evaluation process. This approach could be applied in clinical settings for hearing loss assessment, hearing aid selection, and the development of personalized auditory rehabilitation programs.
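The two architectures compared in the abstract can be sketched minimally as follows. This is an illustrative PyTorch sketch only: the layer widths, the number of PTA input frequencies (here 8, e.g. 250 Hz–8 kHz octave steps), and the two regression targets (speech recognition threshold and discrimination score) are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

N_FREQS = 8  # assumed number of PTA test frequencies fed to the model
N_OUT = 2    # assumed targets: speech recognition threshold, discrimination score

class PTAMLP(nn.Module):
    """Fully connected regressor over the PTA threshold vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FREQS, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, N_OUT),
        )

    def forward(self, x):          # x: (batch, N_FREQS)
        return self.net(x)         # -> (batch, N_OUT)

class PTACNN(nn.Module):
    """1D convolutions over the frequency axis, then a linear head."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool across frequencies
        )
        self.head = nn.Linear(32, N_OUT)

    def forward(self, x):                      # x: (batch, N_FREQS)
        z = self.conv(x.unsqueeze(1))          # add channel dim for Conv1d
        return self.head(z.squeeze(-1))        # -> (batch, N_OUT)
```

The key design difference, as the abstract suggests, is that the 1D-CNN treats the audiogram as an ordered sequence of frequencies and can learn local spectral patterns, whereas the MLP treats each threshold as an independent input feature.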
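The two evaluation metrics reported above, the coefficient of determination (R2) and the mean absolute error (MAE), can be computed as shown below; the hearing-level values in the usage comment are illustrative, not data from the study.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the target (e.g. dB HL)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Illustrative example: hypothetical speech recognition thresholds (dB HL)
true_srt = [50, 60, 70, 80]
pred_srt = [52, 58, 71, 79]
```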
Keywords