Purpose: Despite the success of deep learning for CT reconstruction, deep-learning-based cone-beam CT (CBCT) reconstruction remains challenging due to GPU memory limitations. This study aims to develop a novel deep-learning (DL) technique that performs CBCT reconstruction with a significantly reduced memory requirement, and to demonstrate its feasibility through CBCT reconstruction from sparsely sampled projection data.
Methods: The novel geometry-guided deep-learning (GDL) technique contains a GDL reconstruction module and a DL post-processing module. Guided by the projection geometry, GDL replaces the traditional single large fully connected layer in the network architecture with many small fully connected layers. The GDL reconstruction module learns and performs the projection-to-image domain transformation, and the DL post-processing module further improves image quality after reconstruction. We demonstrated the feasibility and advantages of the model by comparing ground truth CT volumes with CBCT images reconstructed from simulated digitally reconstructed radiographs (DRRs) using 1) the GDL reconstruction module only, 2) the GDL reconstruction module with the DL post-processing module, 3) Feldkamp-Davis-Kress (FDK) reconstruction only, and 4) FDK with the DL post-processing module. Differences were quantified by peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and root-mean-square error (RMSE).
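The core idea of the GDL reconstruction module can be sketched as follows: instead of one dense layer connecting every projection pixel to every voxel, each voxel gets its own small learned linear layer over only the detector pixels that the projection geometry associates with it. The sketch below is a minimal, hypothetical NumPy illustration of that structure; all array sizes are invented for illustration, and the random index lookup stands in for a real cone-beam geometry mapping, which the study derives from the scanner geometry.

```python
import numpy as np

# Hypothetical sizes (illustrative only, not from the study).
n_views, det_h, det_w = 60, 128, 128   # sparsely sampled projections
vol = (64, 64, 64)                     # reconstructed volume grid
k = n_views                            # ~1 geometry-selected detector pixel per view

rng = np.random.default_rng(0)
projections = rng.random((n_views, det_h, det_w)).astype(np.float32)

# Geometry lookup table: for each voxel, the detector pixel it projects to in
# each view. A real implementation computes this from the cone-beam geometry;
# random indices are a stand-in here.
idx_u = rng.integers(0, det_h, size=vol + (n_views,))
idx_v = rng.integers(0, det_w, size=vol + (n_views,))

# One small fully connected layer (k weights + a bias) per voxel, replacing a
# single dense layer of size (n_views*det_h*det_w) x prod(vol).
weights = (rng.standard_normal(vol + (k,)) * 0.01).astype(np.float32)
bias = np.zeros(vol, dtype=np.float32)

view_ids = np.arange(n_views)
# Gather each voxel's k geometry-selected detector readings, then apply that
# voxel's private linear layer.
gathered = projections[view_ids, idx_u, idx_v]     # shape: vol + (n_views,)
volume = (gathered * weights).sum(axis=-1) + bias  # shape: vol
```

In training, the per-voxel weights and biases would be the learned parameters; the geometry lookup is fixed, which is what confines each small layer to a geometrically relevant subset of the projection data.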
Results: Average PSNR, SSIM, and RMSE values of the reconstructed images are 14.092, 0.857, and 0.198 for FDK only; 18.086, 0.894, and 0.125 for GDL only; 23.015, 0.901, and 0.071 for FDK with post-processing; and 25.313, 0.930, and 0.054 for GDL with post-processing. Reconstruction times for all methods are comparable (on the order of seconds). The estimated memory requirement of GDL is reduced by more than 10,000-fold compared to DL methods that use large fully connected layers for CBCT reconstruction.
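The scale of the memory saving follows from a simple parameter count: a single dense projection-to-volume layer needs one weight per (projection pixel, voxel) pair, while the geometry-guided design needs only a few weights per voxel. The arithmetic below uses hypothetical dimensions (not the ones in the study) purely to show why the reduction reaches several orders of magnitude.

```python
# Hypothetical dimensions for illustration; the study's actual sizes differ.
n_views, det_h, det_w = 60, 512, 512   # sparse cone-beam projections
vox = 256 ** 3                         # voxels in the reconstructed volume
proj_pixels = n_views * det_h * det_w

# One large fully connected layer: every projection pixel to every voxel.
dense_params = proj_pixels * vox

# Geometry-guided design: one small layer per voxel with ~n_views weights + bias.
gdl_params = vox * (n_views + 1)

reduction = dense_params / gdl_params
print(f"parameter reduction: {reduction:,.0f}x")
```

With these illustrative numbers the reduction is roughly 2.6e5-fold, comfortably above the >10,000-fold figure reported; the exact factor depends on the detector, volume, and view counts used.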
Conclusion: The GDL technique performs fast and accurate CBCT image reconstruction from sparsely sampled data with a significantly reduced memory requirement compared to existing networks.
Funding Support, Disclosures, and Conflict of Interest: This work is partially supported by a Duke University Chancellor Scholarship and by NIH grants 1R01EB028324-01 and R01CA184173.