IEEE Access (Jan 2024)
Direct Filter Learning From Iterative Reconstructed Images for High-Quality Analytical CBCT Reconstruction Using FDK-Based Neural Network
Abstract
Purpose: We propose an FDK-based neural network that directly learns the filter from an iterative reconstruction (IR) algorithm and applies the learned filter in the FDK algorithm to obtain high-quality CBCT reconstructions.
Methods: The FDK algorithm is rewritten as a linear expression composed of several matrix multiplications and embedded into neural network layers. The FDK-based neural network framework is then built from two fundamental modules and four core network layers. The model learns a filter directly from iteratively reconstructed CBCT images by cascading cosine-weighting, filtering, backprojection, and leaky rectified linear unit (leaky ReLU) layers and setting the filter as the only trainable parameter. Preliminary and simulation studies on abdominal CT datasets are conducted to verify the correctness and effectiveness of the learned filter. Head-and-neck CT data and a Catphan phantom are then used to demonstrate the generalization performance of the learned filter.
Results: The preliminary study shows that the learned filter is consistent with the target filter, with a mean absolute difference of around 0.001. Compared with the conventional FDK algorithm, the FDK-based neural network yields better image quality, with the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) increasing by 67% and 6%, respectively. For the line-pair slice of the Catphan phantom, the SSIM and PSNR are improved by 13.75% and 42.78%, respectively.
Conclusions: The FDK-based neural network reconstructs high-quality images by directly learning a filter from the label images and offers a new perspective on addressing the long reconstruction times of IR methods.
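To illustrate the layer cascade described above, the following is a minimal sketch in a PyTorch style. It assumes the filter is parameterized in the frequency domain and that backprojection is represented by a fixed, precomputed system matrix; the names (FDKNet, cosine_weight, backproj_matrix) and the dense-matrix backprojection are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the described cascade, assuming a PyTorch-style implementation.
# The class/variable names and the dense-matrix backprojection are illustrative
# assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FDKNet(nn.Module):
    """Cosine weighting -> filtering -> backprojection -> leaky ReLU,
    with the 1-D frequency-domain filter as the only trainable parameter."""

    def __init__(self, cosine_weight, backproj_matrix, n_det_u, init_filter=None):
        super().__init__()
        # Fixed geometric weights: cosine weighting per detector pixel, shape (n_views, n_det_v, n_det_u).
        self.register_buffer("cosine_weight", cosine_weight)
        # Fixed system matrix mapping filtered projections to image voxels,
        # shape (n_voxels, n_views * n_det_v * n_det_u); kept non-trainable.
        self.register_buffer("backproj_matrix", backproj_matrix)
        n_freq = n_det_u // 2 + 1  # rFFT length along the detector row direction
        if init_filter is None:
            init_filter = torch.linspace(0.0, 1.0, n_freq)  # ramp-like initialization
        # The only trainable parameter: the filter in the frequency domain.
        self.filter = nn.Parameter(init_filter.clone())

    def forward(self, projections):
        # projections: (n_views, n_det_v, n_det_u)
        weighted = projections * self.cosine_weight                 # cosine-weighting layer
        spectrum = torch.fft.rfft(weighted, dim=-1)                 # filtering layer (row-wise FFT)
        filtered = torch.fft.irfft(spectrum * self.filter,
                                   n=weighted.shape[-1], dim=-1)
        volume = self.backproj_matrix @ filtered.reshape(-1)        # backprojection layer
        return F.leaky_relu(volume, negative_slope=0.01)            # leaky ReLU layer

In such a setup, training would update only the filter (e.g., torch.optim.Adam([model.filter])) against IR-reconstructed label volumes, typically with a voxel-wise loss such as MSE.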
Keywords