IEEE Access (Jan 2022)
Automated Heart Valve Disorder Detection Based on PDF Modeling of Formant Variation Pattern in PCG Signal
Abstract
Analysis of heart valve disorders (HVDs) from heart sounds has been practiced for a long time, and the digital stethoscope now makes it possible to diagnose HVDs from the phonocardiogram (PCG) signal. An automated HVD detection technique based on the PCG signal can serve as a first-hand diagnostic tool for physicians. In this paper, in order to classify different HVDs, we propose to utilize the formant characteristics of the PCG signal, an acoustic property of the heart sound. PCG signals exhibit significant variations across different types of HVDs, and conventional approaches therefore extract time-frequency domain or statistical features from the PCG signal for disease classification; raw PCG signals are also fed directly into sequential networks to classify HVDs. Similar to the formant peaks of a voiced speech signal, the spectrum of the PCG signal exhibits distinguishable peaks, especially in the voiced part of the heart sound (lub-dub). Keeping this key observation in mind, Burg's autoregressive model is used to obtain the parametric spectrum of the PCG signal. The first two formants of the PCG signal, which carry the most informative acoustic properties of the heart sound, are estimated from the Burg spectrum and used for feature extraction; the magnitude, frequency, and phase of each formant are considered in evaluating these features. Instead of processing a long duration of the PCG signal at once, we consider overlapping sub-frames and extract the formants from each sub-frame, which yields a temporal variation pattern of the formants. Finally, we propose fitting a PDF model to this formant variation, and utilize the estimated model parameters along with some statistical features to classify the HVDs. Two well-known, publicly available PCG datasets are used to demonstrate the performance of the proposed method, which efficiently classifies binary and five-class heart sounds.
The results reveal that the proposed method achieves overall accuracies of 93.46% and 99.28% on the two datasets, outperforming previously reported state-of-the-art techniques.
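The pipeline summarized above (Burg AR spectrum, per-sub-frame formant extraction, PDF-model fitting of the formant variation) can be sketched in Python with NumPy/SciPy. This is a minimal illustration, not the paper's exact implementation: the frame length, hop size, AR order, and the choice of a Gaussian as the fitted PDF are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def burg_ar(x, order):
    """Estimate AR coefficients of x via Burg's (maximum-entropy) method."""
    f = np.asarray(x, dtype=float).copy()  # forward prediction errors
    b = f.copy()                           # backward prediction errors
    a = np.array([1.0])
    for m in range(order):
        fm, bm = f[m + 1:], b[m:-1]
        k = -2.0 * np.dot(fm, bm) / (np.dot(fm, fm) + np.dot(bm, bm))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                # Levinson-style order update
        new_f, new_b = fm + k * bm, bm + k * fm
        f[m + 1:], b[m + 1:] = new_f, new_b
    return a

def formants(a, fs, n=2):
    """Frequency, magnitude, and phase of the n sharpest AR resonances."""
    r = np.roots(a)
    r = r[np.imag(r) > 1e-3]                # one pole per conjugate pair
    r = r[np.argsort(np.abs(r))[::-1][:n]]  # n poles nearest the unit circle
    r = r[np.argsort(np.angle(r))]          # order by frequency (F1, F2, ...)
    return np.angle(r) * fs / (2 * np.pi), np.abs(r), np.angle(r)

def formant_track(x, fs, frame=0.1, hop=0.05, order=6):
    """Dominant-formant frequency over overlapping sub-frames."""
    width, step = int(frame * fs), int(hop * fs)
    track = []
    for start in range(0, len(x) - width + 1, step):
        fr, mag, ph = formants(burg_ar(x[start:start + width], order), fs, n=1)
        if fr.size:
            track.append(fr[0])
    return np.array(track)

# Toy check: a 60 Hz tone in noise should give a ~60 Hz formant track, and
# the fitted Gaussian (one possible PDF model) should centre near 60 Hz.
fs = 1000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(t.size)
track = formant_track(x, fs)
mu, sigma = norm.fit(track)  # PDF-model parameters usable as features
```

In the paper's setting, the magnitude, frequency, and phase of the first two formants would each be tracked across sub-frames, and the fitted PDF parameters, together with statistical features of the tracks, would feed the classifier.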
Keywords