Purpose: Respiration-induced tissue motion severely limits treatment accuracy. Real-time 3D images and tumor positions during treatment are desirable for motion management and dose evaluation. To address this, a novel 3D CT reconstruction algorithm based on a single X-ray image acquired at any gantry angle was proposed in this work.
Methods: An end-to-end deep convolutional neural network (DCNN) was proposed to generate a 3D image and a tumor segmentation mask from a single X-ray image. The network consists of three parts: a feature-extraction module, a 2D-to-3D transformation module, and two decoding modules, one for the reconstruction task and the other for segmentation. A novel skip-connection module was designed to shuttle low-level features from the encoder to the decoders, bridging the 2D and 3D feature maps more naturally. The method was preliminarily validated on three lung patient cases. For each case, 1080 CT images with varying tissue motion, together with the corresponding tumor masks, were synthesized using a PCA-based respiratory motion model. For each patient case, the network was trained on 972 of the CT images and tested on the remaining images, taking DRRs generated at random gantry angles as input.
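The abstract does not detail the 2D-to-3D transformation or skip-connection modules; one common design choice (an assumption here, not the authors' stated implementation) is to fold the channel axis of a 2D feature map into a depth axis, and to broadcast low-level 2D encoder features along that depth axis before concatenation in the 3D decoder. A minimal NumPy sketch of that idea, with all shapes and names hypothetical:

```python
import numpy as np

def transform_2d_to_3d(feat_2d, out_channels):
    """Reshape a 2D feature map (C*D, H, W) into a 3D volume
    (out_channels, D, H, W) by splitting the channel axis.
    Hypothetical sketch; the paper's actual module is not specified."""
    c, h, w = feat_2d.shape
    assert c % out_channels == 0, "channels must factor into (out_channels, depth)"
    depth = c // out_channels
    return feat_2d.reshape(out_channels, depth, h, w)

def skip_connect(feat_2d, vol_3d):
    """Shuttle a low-level 2D encoder feature map to the 3D decoder by
    broadcasting it along the depth axis and concatenating on channels."""
    c, d, h, w = vol_3d.shape
    tiled = np.broadcast_to(feat_2d[:, None, :, :],
                            (feat_2d.shape[0], d, h, w))
    return np.concatenate([vol_3d, tiled], axis=0)

# Toy shapes: a 256-channel 2D map becomes a 4-channel, 64-slice volume.
feat = np.random.rand(256, 32, 32)
vol = transform_2d_to_3d(feat, out_channels=4)   # shape (4, 64, 32, 32)
low = np.random.rand(8, 32, 32)                  # low-level encoder features
fused = skip_connect(low, vol)                   # shape (12, 64, 32, 32)
```

In a real network these operations would be interleaved with learned 2D/3D convolutions; the sketch only illustrates how a 2D map can be bridged to a 3D decoder.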
Results: The network training took about 4 h to converge, while reconstructing the CT image and segmenting the tumor from a single X-ray image took only 35 ms. For the 3D reconstruction task, the PSNR values for the three patients were 38.6, 35.6, and 36.5 dB, and the SSIM values were 0.983, 0.969, and 0.975, respectively. For tumor segmentation, the Dice coefficients were 0.744, 0.807, and 0.980, showing a positive correlation with tumor size (1.6, 5.8, and 66.9 cc).
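The reported PSNR and Dice values follow standard definitions; a small illustrative sketch of how such metrics are computed (the arrays and values below are toy examples, not the study's data):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range is the maximum
    possible intensity of the reference image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

ref = np.zeros((4, 4))
noisy = ref.copy()
noisy[0, 0] = 0.1                    # MSE = 0.01 / 16 = 0.000625
print(round(psnr(ref, noisy), 1))    # -> 32.0

m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
print(round(dice(m1, m2), 3))        # 2*1 / (2+1) -> 0.667
```

SSIM is typically computed with an existing implementation such as skimage.metrics.structural_similarity rather than by hand.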
Conclusion: Leveraging the powerful modeling ability of DCNNs, it is promising to simultaneously reconstruct the 3D image and segment the 3D tumor contour from a single X-ray image.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by the National Key R&D Program of China under Grant Nos. 2018YFA0704100 and 2018YFA0704101.