Scientific Reports (Feb 2024)

Reducing image artifacts in sparse projection CT using conditional generative adversarial networks

  • Keisuke Usui,
  • Sae Kamiyama,
  • Akihiro Arita,
  • Koichi Ogawa,
  • Hajime Sakamoto,
  • Yasuaki Sakano,
  • Shinsuke Kyogoku,
  • Hiroyuki Daida

DOI
https://doi.org/10.1038/s41598-024-54649-x
Journal volume & issue
Vol. 14, no. 1
pp. 1–12

Abstract

Reducing the amount of projection data in computed tomography (CT), known as sparse-view CT, can reduce the exposure dose; however, image artifacts can occur. We quantitatively evaluated the effects of conditional generative adversarial networks (CGAN) on image quality restoration for sparse-view CT using simulated sparse projection images and compared them with autoencoder (AE) and U-Net models. The AE, U-Net, and CGAN models were trained using pairs of artifact-containing and original images; 90% of patient cases were used for training and the remainder for evaluation. Restoration of CT values was evaluated using the mean error (ME) and mean absolute error (MAE). Image quality was evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). Image quality improved for all sparse projection data; however, slight deformation in tumor and spine regions was observed when the projection interval exceeded 5°, and some hallucinated regions appeared in the CGAN results. Image resolution decreased and blurring occurred with AE and U-Net; consequently, large ME and MAE deviations were observed in lung and air regions, and the SSIM and PSNR results were degraded. The CGAN model achieved more accurate CT value restoration and improved SSIM and PSNR compared with the AE and U-Net models.
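The evaluation metrics named in the abstract (ME, MAE, PSNR, SSIM) can be illustrated with a minimal sketch. This is not the authors' code; it assumes images are NumPy arrays and uses a simplified single-window SSIM with the standard constants, rather than the windowed SSIM typically used in practice.

```python
import numpy as np

def mean_error(ref, test):
    """Signed mean error (ME): positive if the test image overestimates."""
    return float(np.mean(test - ref))

def mean_absolute_error(ref, test):
    """Mean absolute error (MAE) of CT values."""
    return float(np.mean(np.abs(test - ref)))

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((test - ref) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, test, data_range):
    """Simplified global (single-window) SSIM; standard C1, C2 constants."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2))
                 / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

For example, a restored image offset from its reference by a constant +1 HU gives ME = MAE = 1.0, while an identical pair yields SSIM = 1.0 and infinite PSNR.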

Keywords