Purpose: Deep learning techniques have been developed to reconstruct CT images from sinograms with encouraging results. However, these techniques rely on large fully connected (FC) layers to perform the projection-to-image domain transformation, creating large, resource-intensive models that can exceed available computational memory. Based on the CT projection geometry, this study proposes a novel geometry-guided multi-beamlet deep learning (GMDL) technique that constructs models with significantly reduced size and GPU memory consumption. The study compares the proposed technique with the FC layer-based deep learning (FCDL) method on CT images reconstructed from low-dose real patient data.
Methods: Instead of using large FC layers, in which most network connections contribute little to the domain transformation, the proposed GMDL technique learns the transformation by constructing many small FC layers based on the projection geometry. These small FC layers connect a pixel in the projection domain to one central beamlet and several peripheral beamlets along the projection ray in the image domain. To evaluate the performance of the GMDL technique, we compared ground-truth full-dose CT images with low-dose CT images reconstructed with the GMDL, FCDL, and conventional filtered back-projection (FBP) methods. The reconstructed images were quantitatively analyzed with the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean square error (RMSE).
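A minimal sketch of this geometry-guided domain transformation is given below, assuming a PyTorch implementation; the module name, the precomputed beamlet index map, and the weight initialization are illustrative assumptions rather than the authors' code. Each projection pixel carries a small learnable weight vector over the image pixels traversed by its central and peripheral beamlets, in place of one large dense FC layer.

```python
import torch
import torch.nn as nn

class BeamletDomainTransform(nn.Module):
    """Hypothetical sketch of a geometry-guided projection-to-image transform.

    `beamlet_idx[p]` holds the indices of the image pixels traversed by the
    central beamlet of projection pixel p and by its peripheral beamlets,
    precomputed from the CT projection geometry (not shown here).
    """

    def __init__(self, beamlet_idx: torch.Tensor, n_image_pixels: int):
        super().__init__()
        # beamlet_idx: (n_proj_pixels, n_beamlet_pixels) integer index map.
        self.register_buffer("beamlet_idx", beamlet_idx)
        n_proj, n_beam = beamlet_idx.shape
        # One small weight vector per projection pixel instead of a dense
        # (n_proj_pixels x n_image_pixels) FC weight matrix.
        self.weight = nn.Parameter(0.01 * torch.randn(n_proj, n_beam))
        self.n_image_pixels = n_image_pixels

    def forward(self, sino: torch.Tensor) -> torch.Tensor:
        # sino: (batch, n_proj_pixels) flattened sinogram.
        batch = sino.shape[0]
        contrib = sino.unsqueeze(-1) * self.weight          # (B, n_proj, n_beam)
        idx = self.beamlet_idx.unsqueeze(0).expand(batch, -1, -1)
        image = sino.new_zeros(batch, self.n_image_pixels)
        # Scatter-add each beamlet contribution onto the image grid.
        image.scatter_add_(1, idx.reshape(batch, -1), contrib.reshape(batch, -1))
        return image
```

With two peripheral beamlets on each side of the central beamlet (the optimal setting reported below), each weight vector spans five beamlets, so the parameter count scales with the beamlet footprint rather than with the full image size.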
Results: The optimal beamlet configuration was two peripheral beamlets on each side of the central beamlet. CT images reconstructed with the GMDL technique showed improved image quality compared with images reconstructed by the FCDL and FBP methods in terms of PSNR, SSIM, and RMSE. With the GMDL technique, the deep learning model size and GPU memory consumption were reduced to less than 1/100 of those of the FCDL model.
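For reference, the three image-quality metrics can be computed as in the following sketch, which uses scikit-image; the function name and the data-range convention are assumptions for illustration and not part of the study.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_reconstruction(full_dose: np.ndarray, recon: np.ndarray) -> dict:
    """Compare a reconstructed low-dose image against the full-dose ground truth."""
    data_range = float(full_dose.max() - full_dose.min())
    rmse = float(np.sqrt(np.mean((full_dose - recon) ** 2)))
    psnr = peak_signal_noise_ratio(full_dose, recon, data_range=data_range)
    ssim = structural_similarity(full_dose, recon, data_range=data_range)
    return {"PSNR": psnr, "SSIM": ssim, "RMSE": rmse}
```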
Conclusion: Compared with FCDL, the proposed GMDL technique was shown to reconstruct low-dose CT images with improved image quality and significantly reduced model size and computational resource consumption.
Funding Support, Disclosures, and Conflict of Interest: This work was partially supported by a Duke University Chancellor Scholarship, the National Institutes of Health under Grant Nos. R01-EB028324, R01-EB001838, and R01-CA226899, and a research grant from Varian Medical Systems.