Scientific Reports (Jun 2022)

Unsupervised-learning-based method for chest MRI–CT transformation using structure constrained unsupervised generative attention networks

  • Hidetoshi Matsuo,
  • Mizuho Nishio,
  • Munenobu Nogami,
  • Feibi Zeng,
  • Takako Kurimoto,
  • Sandeep Kaushik,
  • Florian Wiesinger,
  • Atsushi K. Kono,
  • Takamichi Murakami

DOI
https://doi.org/10.1038/s41598-022-14677-x
Journal volume & issue
Vol. 12, no. 1
pp. 1–15

Abstract


The integrated positron emission tomography/magnetic resonance imaging (PET/MRI) scanner simultaneously acquires metabolic information via PET and morphological information via MRI. However, attenuation correction, which is necessary for quantitative PET evaluation, is difficult because it requires attenuation-correction maps to be generated from MRI, which bears no direct relationship to gamma-ray attenuation. MRI-based bone tissue segmentation can potentially be used for attenuation correction in relatively rigid and fixed organs such as the head and pelvis. However, it is challenging in the chest region because of respiratory and cardiac motion, the region's anatomically complicated structure, and the thin bone cortex. We propose a new method based on unsupervised generative attentional networks with adaptive layer-instance normalisation for image-to-image translation (U-GAT-IT), which specialise in unpaired image translation guided by attention maps. We added the modality-independent neighbourhood descriptor (MIND) to the U-GAT-IT loss to guarantee anatomical consistency in the transformation between image domains. Our proposed method produced synthesised computed tomography (CT) images of the chest, and experimental results showed that it outperformed current approaches. These findings suggest that clinically acceptable CT images can be synthesised from chest MRI with minimal changes to anatomical structures and without human annotation.
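The key technical step summarised above is the addition of a MIND-based structure term to the U-GAT-IT generator loss. The paper's exact formulation is not reproduced on this page, so the following is a minimal 2D PyTorch sketch of such a term, assuming a 3×3 box-filtered patch distance and a four-neighbourhood; the function names (`mind_descriptor`, `mind_loss`) and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mind_descriptor(img, eps=1e-6):
    """Simplified 2D MIND: per-pixel self-similarities between each pixel's
    3x3 patch and the patches at its four axial neighbours.

    img: tensor of shape (N, 1, H, W).
    Returns a (N, 4, H, W) descriptor with values in [0, 1].
    """
    _, _, h, w = img.shape
    box = torch.ones(1, 1, 3, 3, device=img.device) / 9.0    # patch kernel
    padded = F.pad(img, (1, 1, 1, 1), mode="replicate")
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]             # 4-neighbourhood
    dists = []
    for dy, dx in offsets:
        shifted = padded[:, :, 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        diff2 = (img - shifted) ** 2
        dists.append(F.conv2d(diff2, box, padding=1))        # patch-wise SSD
    dist = torch.cat(dists, dim=1)                           # (N, 4, H, W)
    variance = dist.mean(dim=1, keepdim=True).clamp(min=eps) # local variance
    mind = torch.exp(-dist / variance)
    return mind / mind.amax(dim=1, keepdim=True).clamp(min=eps)

def mind_loss(real_mri, synth_ct):
    """L1 distance between the MIND descriptors of the input MRI slice and
    the translated (synthesised) CT slice; a structure-consistency term that
    can be added to the generator loss of an unpaired translation network."""
    return F.l1_loss(mind_descriptor(real_mri), mind_descriptor(synth_ct))

if __name__ == "__main__":
    mri = torch.rand(2, 1, 256, 256)      # dummy MRI batch
    fake_ct = torch.rand(2, 1, 256, 256)  # dummy generator output
    print(mind_loss(mri, fake_ct).item())
```

Because MIND compares local self-similarity patterns rather than raw intensities, the same anatomy yields similar descriptors in MRI and CT, which is what makes it suitable as a cross-modality consistency loss in unpaired image translation.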