Sensors (May 2019)

Deep CT to MR Synthesis Using Paired and Unpaired Data

  • Cheng-Bin Jin,
  • Hakil Kim,
  • Mingjie Liu,
  • Wonmo Jung,
  • Seongsu Joo,
  • Eunsik Park,
  • Young Saem Ahn,
  • In Ho Han,
  • Jae Il Lee,
  • Xuenan Cui

DOI
https://doi.org/10.3390/s19102361
Journal volume & issue
Vol. 19, no. 10
p. 2361

Abstract

Magnetic resonance (MR) imaging plays a highly important role in radiotherapy treatment planning for the segmentation of tumor volumes and organs. However, the use of MR is limited owing to its high cost and the increasing use of metal implants in patients. This study targets patients for whom MR is contraindicated owing to claustrophobia or cardiac pacemakers, as well as the many scenarios in which only computed tomography (CT) images are available, such as emergencies, settings lacking an MR scanner, and cases in which the cost of an MR scan is prohibitive. In medical practice, our approach can be adopted by radiologists as a screening method for observing abnormal anatomical lesions in diseases that are difficult to diagnose from CT. The proposed approach estimates an MR image from a CT image using both paired and unpaired training data. In contrast to existing synthesis methods for medical imaging, which depend on either sparse pairwise-aligned data or plentiful unpaired data, the proposed approach relaxes the rigid registration requirement of paired training and overcomes the context-misalignment problem of unpaired training. A generative adversarial network was trained to transform two-dimensional (2D) brain CT image slices into 2D brain MR image slices by combining the adversarial, dual cycle-consistent, and voxel-wise losses. Qualitative and quantitative comparisons against independent paired and unpaired training methods demonstrated the superiority of our approach.
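The combined generator objective described in the abstract (adversarial + dual cycle-consistent + voxel-wise terms) can be sketched as below. This is a minimal NumPy illustration, not the paper's exact formulation: the least-squares adversarial form, the L1 distances, and the weights `lambda_cyc` and `lambda_vox` are assumptions for illustration only.

```python
import numpy as np

def adversarial_loss(d_fake):
    # Generator's adversarial term: push the discriminator score on the
    # synthesized MR toward 1 (least-squares GAN form assumed here).
    return float(np.mean((d_fake - 1.0) ** 2))

def cycle_loss(x, x_cycled):
    # L1 cycle-consistency: mapping CT -> MR -> CT (or MR -> CT -> MR)
    # should recover the original slice.
    return float(np.mean(np.abs(x - x_cycled)))

def voxel_wise_loss(mr_synth, mr_real):
    # Paired supervision: voxel-wise L1 between the synthesized MR slice
    # and its aligned ground-truth MR slice.
    return float(np.mean(np.abs(mr_synth - mr_real)))

def total_generator_loss(d_fake, ct, ct_cycled, mr, mr_cycled, mr_synth,
                         lambda_cyc=10.0, lambda_vox=10.0):
    # "Dual" cycle consistency covers both directions:
    # CT -> MR -> CT and MR -> CT -> MR.
    cyc = cycle_loss(ct, ct_cycled) + cycle_loss(mr, mr_cycled)
    return (adversarial_loss(d_fake)
            + lambda_cyc * cyc
            + lambda_vox * voxel_wise_loss(mr_synth, mr))
```

In practice each term would be computed on network outputs within a training loop; the sketch only shows how the three losses are weighted and summed into one generator objective.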

Keywords