Purpose: 4D-CBCT is valuable for 4D target verification in radiotherapy. However, due to scanning-time and imaging-dose constraints, the projections acquired for each respiratory phase are intrinsically under-sampled, leading to streak artifacts in FDK reconstruction or blurring in iterative reconstruction using compressed sensing (CS). Introducing an average 4D-image constraint into CS-based reconstruction, as in prior-image-constrained CS (PICCS), can improve the edge sharpness of static structures. However, PICCS can introduce motion artifacts in moving regions. In this study, we proposed a dual-encoder convolutional neural network (DeCNN) for fast, high-quality, average image-constrained 4D-CBCT reconstruction.
Methods: The proposed DeCNN has two parallel encoders that extract features from the average images and the under-sampled target-phase images, respectively. The two encoders have the same architecture but do not share weights. Average-image features and under-sampled-image features are extracted at multiple scales, then concatenated and fed into the decoder to reconstruct high-quality images of the target phase. DeCNN was trained on 28 lung 4D-CBCT datasets and tested on 12 lung 4D-CBCT datasets. Its results were evaluated both qualitatively and quantitatively using RMSE, PSNR, and SSIM, and compared against other state-of-the-art methods.
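The dual-encoder data flow described above can be sketched as follows. This is a toy NumPy illustration of the fusion idea only, not the paper's network: average pooling stands in for the convolutional encoder layers, nearest-neighbour upsampling for the decoder layers, and the level count, image size, and fusion-by-averaging are all illustrative assumptions.

```python
import numpy as np

def encode(img, levels=3):
    """Toy encoder: collects multi-scale features via 2x average pooling.
    (Stands in for a convolutional encoder; learned weights are omitted.)"""
    feats, x = [], img
    for _ in range(levels):
        feats.append(x)
        h, w = x.shape
        # 2x2 average pooling as a stand-in for a strided conv layer
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    feats.append(x)  # coarsest scale
    return feats

def decode(avg_feats, phase_feats):
    """Toy decoder: fuses the two encoder streams at each scale
    (here by stacking and averaging, in place of concatenation + convs)."""
    x = np.stack([avg_feats[-1], phase_feats[-1]]).mean(axis=0)
    for fa, fp in zip(reversed(avg_feats[:-1]), reversed(phase_feats[:-1])):
        # nearest-neighbour upsampling back to the finer scale
        x = x.repeat(2, axis=0).repeat(2, axis=1)
        # skip-connection fusion of both encoder streams
        x = np.stack([x, fa, fp]).mean(axis=0)
    return x

avg_img = np.random.rand(64, 64)    # average 4D image (the prior/constraint)
phase_img = np.random.rand(64, 64)  # under-sampled target-phase image
out = decode(encode(avg_img), encode(phase_img))
print(out.shape)  # (64, 64): reconstructed target-phase image
```

The key point the sketch preserves is that the two encoders run independently (no shared weights) and their multi-scale features are fused in a single decoder.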
Results: 4D-CBCT reconstructed by DeCNN showed: (1) qualitatively, clear and accurate edges for both stable and moving structures; (2) quantitatively, low RMSE, high PSNR, and high SSIM compared to the ground-truth images; and (3) superior quality to those reconstructed by other methods, including back-projection, CS total-variation, PICCS, and the single-encoder CNN. DeCNN took only about 1.8 seconds to reconstruct images of one respiratory phase.
Conclusion: The proposed dual-encoder architecture proved effective at improving 4D-CBCT image quality under the average image constraint, thereby improving the clinical utility of 4D-CBCT for moving-target localization.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by the National Institutes of Health under Grant Nos. R01-CA184173 and R01-EB028324.