Bioengineering (Jan 2023)

Myocardial Segmentation of Tagged Magnetic Resonance Images with Transfer Learning Using Generative Cine-To-Tagged Dataset Transformation

  • Arnaud P. Dhaene
  • Michael Loecher
  • Alexander J. Wilson
  • Daniel B. Ennis

DOI: https://doi.org/10.3390/bioengineering10020166
Journal volume & issue: Vol. 10, no. 2, p. 166

Abstract

The use of deep learning (DL) segmentation in cardiac MRI has the potential to streamline the radiology workflow, particularly for the measurement of myocardial strain. Recent DL motion-tracking models have drastically reduced the time needed to measure the heart’s displacement field and the subsequent myocardial strain estimation. However, the selection of initial myocardial reference points is not automated and still requires manual input from domain experts. Segmentation of the myocardium is a key step in initializing these reference points. While high-performing myocardial segmentation models exist for cine images, this is not the case for tagged images. In this work, we developed and compared two novel DL models (nnU-Net and Segmentation ResNet VAE) for segmenting the myocardium in tagged CMR images. We implemented two methods to transform cardiac cine images into tagged images, allowing us to leverage large public annotated cine datasets. The cine-to-tagged methods included (i) a novel physics-driven transformation model and (ii) a generative adversarial network (GAN) style-transfer model. We show that pretrained models perform better (+2.8 Dice coefficient percentage points) and converge faster (6×) than models trained from scratch. The best-performing method relies on pretraining with an unpaired, unlabeled, and structure-preserving generative model trained to transform cine images into their tagged-appearing equivalents. Our state-of-the-art myocardium segmentation network reached a Dice coefficient of 0.828 and a 95th percentile Hausdorff distance of 4.745 mm on a held-out test set. This performance is comparable to that of existing state-of-the-art segmentation networks for cine images.
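The abstract refers to a physics-driven cine-to-tagged transformation and reports accuracy as a Dice coefficient. As a rough, illustrative sketch only (not the authors' implementation), the Python snippet below overlays a SPAMM-like sinusoidal grid-tag pattern on a cine frame and computes the Dice overlap between two binary myocardium masks; the function names and parameters (apply_synthetic_tags, tag_spacing_px, tag_angle_deg, depth) are assumptions introduced here, and the paper's actual physics-driven model is more detailed than this simple intensity modulation.

    import numpy as np

    def apply_synthetic_tags(cine_image, tag_spacing_px=8.0, tag_angle_deg=45.0, depth=0.7):
        # Overlay a SPAMM-like sinusoidal grid-tag pattern on a single 2D cine frame.
        # Illustrative sketch only: it modulates pixel intensities with two crossed
        # sinusoids and does not model the tagging pulse sequence or tag fading.
        h, w = cine_image.shape
        y, x = np.mgrid[0:h, 0:w]
        theta = np.deg2rad(tag_angle_deg)
        k = 2.0 * np.pi / tag_spacing_px  # spatial frequency of the tag lines
        # Two orthogonal stripe patterns multiply into a grid-tag appearance.
        stripes_a = 1.0 - depth * np.cos(k * (x * np.cos(theta) + y * np.sin(theta))) ** 2
        stripes_b = 1.0 - depth * np.cos(k * (-x * np.sin(theta) + y * np.cos(theta))) ** 2
        return cine_image * stripes_a * stripes_b

    def dice_coefficient(pred_mask, true_mask, eps=1e-7):
        # Dice overlap between two binary myocardium masks (1.0 = perfect agreement).
        pred = np.asarray(pred_mask, dtype=bool)
        true = np.asarray(true_mask, dtype=bool)
        intersection = np.logical_and(pred, true).sum()
        return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

In this spirit, a tagged-appearing image produced from an annotated cine frame keeps the original myocardium label mask, which is what allows large public cine datasets to be reused for pretraining a tagged-image segmentation network.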

Keywords