Alexandria Engineering Journal (Dec 2023)

Heart sound classification based on bispectrum features and Vision Transformer model

  • Zeye Liu,
  • Hong Jiang,
  • Fengwen Zhang,
  • Wenbin Ouyang,
  • Xiaofei Li,
  • Xiangbin Pan

Journal volume & issue
Vol. 85
pp. 49–59

Abstract

Cardiovascular diseases (CVDs) remain a heavy burden in low- and middle-income regions, where accurate heart sound classification is a key component of early CVD diagnosis and intervention. The reliability of manual auscultation depends on the examining physician's expertise, whereas deep learning algorithms can make heart sound classification more consistent and more widely available. This article presents a model that combines bispectrum-based feature extraction with the Vision Transformer (ViT) for the binary classification of heart sounds as either 'normal' or 'abnormal.' The model uses the PhysioNet Challenge 2022 database, which contains 3163 recordings from 942 patients, and classifies heart sounds with high consistency, including when distinguishing between recordings from pregnant and non-pregnant patients. We also compare the model's performance with that of experienced cardiologists, and the model outperforms them on this task. In settings where health resources are unevenly distributed, such algorithms could provide diagnostic support that matches or exceeds expert auscultation.
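The sketch below is not the authors' released implementation; it only illustrates the pipeline the abstract describes: estimating a bispectrum "image" from a phonocardiogram and passing it to a Vision Transformer for normal/abnormal classification. The NumPy/PyTorch/torchvision stack, the segment length and FFT size of the direct bispectrum estimator, the torchvision ViT-B/16 backbone, and the synthetic stand-in recording are all illustrative assumptions.

```python
# Minimal sketch of a bispectrum-feature + ViT classification pipeline.
# All parameters and the random input signal are illustrative, not the paper's.
import numpy as np
import torch
import torchvision

def bispectrum(signal, nfft=128, seg_len=128, hop=64):
    """Direct (FFT-based) bispectrum estimate averaged over windowed segments."""
    acc = np.zeros((nfft, nfft), dtype=complex)
    n_seg = 0
    for start in range(0, len(signal) - seg_len + 1, hop):
        seg = signal[start:start + seg_len] * np.hanning(seg_len)
        X = np.fft.fft(seg, nfft)
        # B(f1, f2) = E[ X(f1) X(f2) X*(f1 + f2) ]
        idx = np.arange(nfft)
        f1_plus_f2 = (idx[:, None] + idx[None, :]) % nfft
        acc += X[:, None] * X[None, :] * np.conj(X[f1_plus_f2])
        n_seg += 1
    return np.abs(acc / max(n_seg, 1))

# Synthetic one-second signal standing in for a PhysioNet phonocardiogram (4 kHz).
pcg = np.random.randn(4000).astype(np.float32)

# Normalize the bispectrum magnitude and resize it into a 3-channel 224x224
# "image" so it can be fed to a standard ViT backbone.
bis = bispectrum(pcg)
bis = (bis - bis.min()) / (np.ptp(bis) + 1e-8)
img = torch.tensor(bis, dtype=torch.float32)[None, None]           # (1, 1, H, W)
img = torch.nn.functional.interpolate(img, size=(224, 224), mode="bilinear")
img = img.repeat(1, 3, 1, 1)                                        # (1, 3, 224, 224)

# Untrained ViT-B/16 with a two-class (normal / abnormal) head.
model = torchvision.models.vit_b_16(weights=None, num_classes=2)
logits = model(img)
print(logits.shape)  # torch.Size([1, 2])
```

In practice the backbone would be trained (or fine-tuned) on bispectrum images computed from the labelled PhysioNet recordings rather than applied untrained as above.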

Keywords