Purpose: Deep-learning-based CT reconstruction models can generate high-resolution images, but they are susceptible to instability and network bias. This work studies how to remove artifacts and recover fine details from the outputs of an existing deep-learning-based sparse-view CT reconstruction algorithm when only limited training data are available.
Methods: CT scans of multiple anatomic sites, including head, chest, and abdomen, from 50 patients were used in this study. Full-dose images served as the ground truth for evaluation. CT measurements (sinograms) were simulated with parallel-beam geometry using 25 projection angles distributed evenly over 180 degrees, with additive independent Gaussian noise. A deep-learning-based sparse-view reconstruction model was trained on data from 40 patients; data from the remaining 10 patients were reserved for evaluation. We propose a novel corrector algorithm (Residual-NeRP Ensemble) to correct the outputs of an existing deep-learning model. The trained deep-learning model performed the first-step reconstruction for the evaluation patients, and the corrector algorithms were then applied to its outputs to obtain the final reconstructed images.
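The sparse-view acquisition described above can be sketched in a few lines of NumPy. The snippet below is a toy illustration only, not the authors' simulation code: it implements a nearest-neighbour parallel-beam Radon transform (`radon_nn` is a hypothetical helper name), samples 25 angles evenly over 180 degrees, and adds independent Gaussian noise. The noise level `sigma` is an assumed value for demonstration.

```python
import numpy as np

def radon_nn(img, angles_deg):
    """Toy parallel-beam Radon transform via nearest-neighbour sampling.

    Returns a sinogram of shape (num_angles, num_detector_bins).
    """
    n = img.shape[0]
    center = (n - 1) / 2.0
    coords = np.arange(n) - center
    # s indexes detector bins, t runs along each ray
    s, t = np.meshgrid(coords, coords, indexing="ij")
    sino = np.zeros((len(angles_deg), n))
    for k, ang in enumerate(np.deg2rad(angles_deg)):
        # rotate the sampling grid by the projection angle
        x = s * np.cos(ang) - t * np.sin(ang) + center
        y = s * np.sin(ang) + t * np.cos(ang) + center
        xi = np.clip(np.round(x).astype(int), 0, n - 1)
        yi = np.clip(np.round(y).astype(int), 0, n - 1)
        # line integral: sum the image along each ray
        sino[k] = img[yi, xi].sum(axis=1)
    return sino

# 25 projection angles evenly spaced over 180 degrees, as in the study
angles = np.linspace(0.0, 180.0, 25, endpoint=False)
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0  # simple square phantom
sinogram = radon_nn(phantom, angles)

# additive independent Gaussian noise (sigma is an assumed demo value)
rng = np.random.default_rng(0)
noisy_sinogram = sinogram + rng.normal(0.0, 0.5, sinogram.shape)
```

In practice a production simulation would use a proper interpolating projector (e.g., `skimage.transform.radon` or an ASTRA/ODL operator); the nearest-neighbour version here only conveys the geometry.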
Results: Residual-NeRP Ensemble, PICCS, and ADMM-CNN all improve the reconstruction quality of the given deep-learning model. Residual-NeRP Ensemble gives the greatest improvement in PSNR and SSIM across all anatomic sites for the 10 evaluation patients, with an average PSNR gain of 3.22 dB. Residual-NeRP Ensemble recovers fine details and removes artifacts from the given deep-learning reconstruction.
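For reference, PSNR as used in the evaluation can be computed directly from the mean squared error between a reconstruction and the full-dose ground truth. The sketch below is a standard definition, not the authors' evaluation script; note that a 3.22 dB gain corresponds to reducing MSE by a factor of about 10^(3.22/10) ≈ 2.1.

```python
import numpy as np

def psnr(reference, image, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((reference - image) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# example: a uniform error of 0.1 on a unit-range image gives PSNR = 20 dB
ref = np.zeros((32, 32))
rec = np.full((32, 32), 0.1)
print(psnr(ref, rec))  # 20.0
```

SSIM is computed analogously with a windowed structural comparison (e.g., `skimage.metrics.structural_similarity` in practice).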
Conclusion: This work demonstrates the potential to remove bias and recover fine details from a deep-learning-based CT reconstruction algorithm. The corrector algorithms show promise for improving the reconstruction quality of deep-learning-based models. This work can be extended to 3D cone-beam CT reconstruction and accelerated MRI reconstruction in future investigations.