IEEE Access (Jan 2023)

GMCNet: A Generative Multi-Resolution Framework for Cardiac Registration

  • Ameneh Sheikhjafari,
  • Michelle Noga,
  • Ahmed Ahmed,
  • Nilanjan Ray,
  • Kumaradevan Punithakumar

DOI
https://doi.org/10.1109/ACCESS.2023.3238058
Journal volume & issue
Vol. 11
pp. 8185 – 8198

Abstract

Deformable image registration plays a crucial role in estimating cardiac deformation from a sequence of images. However, existing registration methods primarily process images in pairs rather than registering an entire sequence jointly. This study proposes a novel end-to-end, learning-free, generative multi-resolution convolutional neural network (GMCNet) designed primarily to register images in a sequence. Although learning-based methods have achieved high registration performance, that performance depends on learning from a large number of samples, which are difficult to obtain and may bias the framework toward a specific data domain. The proposed learning-free method eliminates the need for a dedicated training set while exploiting the capabilities of neural networks to achieve accurate deformation fields. Owing to parameter sharing across its architecture, GMCNet supports both groupwise and pairwise registration. The proposed method was evaluated on three clinical cardiac magnetic resonance imaging datasets and compared quantitatively against nine state-of-the-art learning- and optimization-based algorithms. It outperformed the other methods in all comparisons, yielding average Dice metric values ranging from 0.85 to 0.88 across the datasets. Different aspects of GMCNet are also explored by assessing 1) robustness; 2) performance on pairwise registration; 3) the influence of spatial transformation in a controlled environment; and 4) the impact of different multi-resolution structures. The results demonstrate that using temporal information to estimate deformation fields leads to more accurate registration and improved robustness under different noise levels. Moreover, because the proposed method needs no training images, its predictions are not domain-specific and it can be applied to any sequence of images.
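To make the "learning-free" idea concrete: instead of training a network on a dataset, the deformation is optimized per image (pair) at registration time. The sketch below is not GMCNet itself (the paper parameterizes the deformation with a multi-resolution CNN and registers whole sequences); it is a minimal, assumed first-order illustration in NumPy that fits a raw 2-D displacement field to a single image pair by gradient descent on an intensity-difference loss, with bilinear warping and no regularization.

```python
import numpy as np

def warp(image, disp):
    """Bilinearly warp a 2-D image by a displacement field disp of shape (2, H, W)."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y = np.clip(ys + disp[0], 0, H - 1)
    x = np.clip(xs + disp[1], 0, W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0]
            + wy * wx * image[y1, x1])

def register(fixed, moving, steps=200, lr=10.0):
    """Learning-free registration of one pair: optimize a displacement field
    (no training set) so that warp(moving, disp) matches fixed."""
    disp = np.zeros((2,) + fixed.shape)
    for _ in range(steps):
        warped = warp(moving, disp)
        err = warped - fixed                 # pointwise intensity error
        gy, gx = np.gradient(warped)         # spatial image gradients
        disp[0] -= lr * err * gy             # first-order descent step on the
        disp[1] -= lr * err * gx             # mean-squared-error loss
    return disp
```

In GMCNet the field is instead produced by an untrained multi-resolution network whose weights are optimized per sequence, which acts as an implicit smoothness prior; the raw-field version above would need explicit regularization to be robust on real images.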

Keywords