npj Digital Medicine (Jun 2023)

A foundational vision transformer improves diagnostic performance for electrocardiograms

  • Akhil Vaid,
  • Joy Jiang,
  • Ashwin Sawant,
  • Stamatios Lerakis,
  • Edgar Argulian,
  • Yuri Ahuja,
  • Joshua Lampert,
  • Alexander Charney,
  • Hayit Greenspan,
  • Jagat Narula,
  • Benjamin Glicksberg,
  • Girish N Nadkarni

DOI
https://doi.org/10.1038/s41746-023-00840-9
Journal volume & issue
Vol. 6, no. 1
pp. 1–8

Abstract


The electrocardiogram (ECG) is a ubiquitous diagnostic modality. Convolutional neural networks (CNNs) applied to ECG analysis require large sample sizes, and transfer learning approaches for biomedical problems may result in suboptimal performance when pre-training is done on natural images. We leveraged masked image modeling to create a vision-based transformer model, HeartBEiT, for electrocardiogram waveform analysis. We pre-trained this model on 8.5 million ECGs and then compared its performance against standard CNN architectures for diagnosis of hypertrophic cardiomyopathy, low left ventricular ejection fraction, and ST-elevation myocardial infarction, using differing training sample sizes and independent validation datasets. We find that HeartBEiT has significantly higher performance at lower sample sizes compared to other models. We also find that HeartBEiT improves explainability of diagnosis by highlighting biologically relevant regions of the ECG compared with standard CNNs. Domain-specific pre-trained transformer models may exceed the classification performance of models trained on natural images, especially in very low data regimes. The combination of the architecture and such pre-training allows for more accurate, granular explainability of model predictions.
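To make the pre-training idea concrete, the sketch below illustrates masked image modeling on rendered ECG images with a small vision transformer. This is not the authors' HeartBEiT implementation: BEiT-style models predict discrete visual tokens from a separate image tokenizer, whereas this self-contained example reconstructs raw patch pixels (an MAE-style simplification). The class name MaskedECGPretrainer, the single-channel 224×224 input, and all model dimensions are illustrative placeholders.

```python
# Illustrative sketch only: masked-image-modeling pre-training of a small
# vision transformer on plotted ECG images. Patch pixels are reconstructed
# directly (a simplification of BEiT's discrete visual-token prediction).
import torch
import torch.nn as nn


class MaskedECGPretrainer(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = (image_size // patch_size) ** 2
        patch_dim = patch_size * patch_size  # single-channel ECG plot
        self.to_embedding = nn.Linear(patch_dim, dim)
        self.pos_embedding = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, depth)
        self.to_pixels = nn.Linear(dim, patch_dim)  # reconstruction head

    def patchify(self, images):
        # (B, 1, H, W) -> (B, num_patches, patch_size * patch_size)
        p = self.patch_size
        b = images.shape[0]
        patches = images.unfold(2, p, p).unfold(3, p, p)
        return patches.reshape(b, -1, p * p)

    def forward(self, images, mask_ratio=0.4):
        patches = self.patchify(images)
        tokens = self.to_embedding(patches) + self.pos_embedding
        # Randomly mask a fraction of patch tokens and swap in the mask token.
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < mask_ratio
        tokens = torch.where(
            mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens
        )
        decoded = self.to_pixels(self.encoder(tokens))
        # Reconstruction loss is computed only on the masked patches.
        return ((decoded - patches) ** 2)[mask].mean()


# Example pre-training step on a batch of rendered single-channel ECG images.
model = MaskedECGPretrainer()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
batch = torch.randn(8, 1, 224, 224)  # stand-in for plotted 12-lead ECGs
optimizer.zero_grad()
loss = model(batch)
loss.backward()
optimizer.step()
```

After such pre-training, the encoder would be fine-tuned with a small classification head on labeled ECGs (e.g., hypertrophic cardiomyopathy or low ejection fraction), which is where the paper reports its advantage over CNNs at low sample sizes.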