EJNMMI Physics (Aug 2025)

Development and validation of 3D super-resolution convolutional neural network for 18F-FDG-PET images

  • Hiroki Endo,
  • Kenji Hirata,
  • Keiichi Magota,
  • Takaaki Yoshimura,
  • Chietsugu Katoh,
  • Kohsuke Kudo

DOI
https://doi.org/10.1186/s40658-025-00791-y
Journal volume & issue
Vol. 12, no. 1
pp. 1 – 20

Abstract

Background
Positron emission tomography (PET) is a valuable tool for cancer diagnosis but generally has a lower spatial resolution than computed tomography (CT) or magnetic resonance imaging (MRI). High-resolution PET scanners that use silicon photomultipliers and time-of-flight measurements are expensive, so cost-effective software-based super-resolution methods are required. This study proposes a novel approach for enhancing whole-body PET image resolution by applying a 2.5-dimensional Super-Resolution Convolutional Neural Network (2.5D-SRCNN) combined with logarithmic transformation preprocessing. The method aims to improve image quality and maintain quantitative accuracy, particularly for standardized uptake value (SUV) measurements, while offering a memory-efficient alternative to full three-dimensional processing and managing the wide dynamic range of tracer uptake in PET images.

Methods
We analyzed data from 90 patients who underwent whole-body FDG-PET/CT examinations and reconstructed low-resolution slices with a voxel size of 4 × 4 × 4 mm and corresponding high-resolution slices with a voxel size of 2 × 2 × 2 mm. The proposed 2.5D-SRCNN model, based on the conventional 2D-SRCNN structure, incorporates information from adjacent slices to generate a high-resolution output. Logarithmic transformation of the voxel values was applied to manage the large dynamic range caused by physiological tracer accumulation in the bladder. Performance was assessed using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), and the quantitative accuracy of SUV measurements was validated in a phantom study.

Results
The 2.5D-SRCNN with logarithmic transformation significantly outperformed the conventional 2D-SRCNN in terms of PSNR and SSIM (p < 0.0001). The proposed method also improved the depiction of small spheres in the phantom while maintaining SUV accuracy.

Conclusions
Our proposed super-resolution method for whole-body PET images, combining the 2.5D approach with logarithmic transformation, may be effective in generating super-resolution images with lower spatial error and better quantitative accuracy.
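
The abstract does not give implementation details, so the following is only a minimal sketch of the described idea, written in PyTorch under several assumptions: the classic SRCNN 9-1-5 layer layout, three adjacent axial slices supplied as input channels, and a log(x + 1) transform with its inverse for the SUV dynamic range. The names (SRCNN25D, log_transform, inverse_log_transform) and all hyperparameters are hypothetical and are not taken from the authors' code.

```python
import torch
import torch.nn as nn


class SRCNN25D(nn.Module):
    """Illustrative 2.5D SRCNN: a stack of adjacent axial slices in, one HR slice out.
    Layer widths and kernel sizes follow the classic SRCNN (9-1-5) layout and are
    assumptions, not the paper's exact configuration."""

    def __init__(self, n_adjacent_slices: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_adjacent_slices, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),                  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_adjacent_slices, H, W) low-resolution slices already
        # interpolated to the high-resolution grid
        return self.body(x)


def log_transform(suv: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Compress the wide SUV dynamic range (e.g. bladder uptake) before the network."""
    return torch.log(suv + eps)


def inverse_log_transform(x: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Map network output back to SUV units for quantitative evaluation."""
    return torch.exp(x) - eps


if __name__ == "__main__":
    model = SRCNN25D(n_adjacent_slices=3)
    # Dummy stack of three adjacent low-resolution slices with synthetic SUV-like values,
    # assumed to be resampled to the high-resolution matrix (e.g. 4 mm -> 2 mm voxels).
    lr_stack = torch.rand(1, 3, 256, 256) * 20.0
    hr_pred = inverse_log_transform(model(log_transform(lr_stack))[:, 0])
    print(hr_pred.shape)  # torch.Size([1, 256, 256])
```

In this sketch, as in the original SRCNN formulation, the low-resolution input is assumed to be interpolated to the high-resolution grid beforehand, so the network restores detail rather than changing the matrix size; whether the authors follow this convention is not stated in the abstract.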
