APL Machine Learning (Dec 2023)

Attention hybrid variational net for accelerated MRI reconstruction

  • Guoyao Shen,
  • Boran Hao,
  • Mengyu Li,
  • Chad W. Farris,
  • Ioannis Ch. Paschalidis,
  • Stephan W. Anderson,
  • Xin Zhang

DOI
https://doi.org/10.1063/5.0165485
Journal volume & issue
Vol. 1, no. 4
pp. 046116 – 046116-8

Abstract

The application of compressed sensing (CS)-enabled data reconstruction for accelerating magnetic resonance imaging (MRI) remains a challenging problem, because the information lost in k-space under the acceleration mask makes it difficult to reconstruct an image comparable in quality to a fully sampled one. Multiple deep learning-based structures have been proposed for CS MRI reconstruction, operating in the k-space domain, in the image domain, and via unrolled optimization methods. However, these structures do not fully utilize the information from both domains (k-space and image). Herein, we propose a deep learning-based attention hybrid variational network that performs learning in both the k-space and image domains. To demonstrate the performance of our network, we evaluate it on a well-known open-source MRI dataset (652 brain cases and 1172 knee cases) and on a clinical MRI dataset from our institution of 243 patients diagnosed with stroke. Our model achieves an overall peak signal-to-noise ratio/structural similarity of 40.92 ± 0.29/0.9577 ± 0.0025 (fourfold) and 37.03 ± 0.25/0.9365 ± 0.0029 (eightfold) for the brain dataset, 31.09 ± 0.25/0.6901 ± 0.0094 (fourfold) and 29.49 ± 0.22/0.6197 ± 0.0106 (eightfold) for the knee dataset, and 36.32 ± 0.16/0.9199 ± 0.0029 (20-fold) and 33.70 ± 0.15/0.8882 ± 0.0035 (30-fold) for the stroke dataset. In addition to this quantitative evaluation, we undertook a blinded comparison of image quality across networks, performed by a subspecialty-trained radiologist. Overall, we demonstrate that our network achieves superior performance relative to competing networks across multiple reconstruction tasks.
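To make the hybrid k-space/image-domain idea concrete, below is a minimal PyTorch sketch of one unrolled cascade under simplifying assumptions: single-coil data with real/imaginary parts stacked as channels and a column-wise random undersampling mask. The names here (HybridCascade, data_consistency) and the placeholder 3x3 convolutions are illustrative stand-ins, not the paper's attention-based architecture.

import torch
import torch.fft as fft

def data_consistency(k_pred, k_meas, mask):
    # Keep the measured k-space samples wherever the mask is 1;
    # let the network's prediction fill in the unsampled locations.
    return torch.where(mask.bool(), k_meas, k_pred)

def to_complex(x):
    # (B, 2, H, W) real/imag channels -> (B, H, W) complex
    return torch.view_as_complex(x.permute(0, 2, 3, 1).contiguous())

def to_channels(x):
    # (B, H, W) complex -> (B, 2, H, W) real/imag channels
    return torch.view_as_real(x).permute(0, 3, 1, 2)

class HybridCascade(torch.nn.Module):
    # One unrolled step: k-space refinement -> data consistency
    # -> image-domain refinement.
    def __init__(self):
        super().__init__()
        # Placeholder convolutions; the paper's network uses
        # attention-based sub-networks in each domain instead.
        self.k_net = torch.nn.Conv2d(2, 2, 3, padding=1)
        self.i_net = torch.nn.Conv2d(2, 2, 3, padding=1)

    def forward(self, k, k_meas, mask):
        # Refine in k-space, then re-impose the measured samples.
        k = data_consistency(k + self.k_net(k), k_meas, mask)
        # Refine in the image domain via the inverse FFT.
        img = to_channels(fft.ifft2(to_complex(k)))
        img = img + self.i_net(img)
        # Return to k-space for the next cascade.
        return to_channels(fft.fft2(to_complex(img)))

# Toy usage: roughly fourfold column-wise undersampling of a 320 x 320 slice.
k_full = torch.randn(1, 2, 320, 320)
mask = (torch.rand(1, 1, 1, 320) < 0.25).float()
k_meas = k_full * mask
k_hat = HybridCascade()(k_meas, k_meas, mask)

In a full reconstruction network, several such cascades would typically be stacked and trained end-to-end against fully sampled reference images, with reconstruction quality measured by metrics such as the PSNR/SSIM figures reported above.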