Purpose: Due to inter- and intra-fraction variations in respiratory motion, real-time volumetric imaging is highly desirable during lung SBRT delivery. Such imaging could improve active motion management and better account for motion interplay effects in the reconstruction of delivered dose. We recently developed a novel deep learning-based method that derives 3D CT images from a single 2D projection image and validated its efficacy in CT simulation studies. The purpose of this study is to investigate the clinical feasibility of this method using actual patient CBCT data and associated 2D projection images from lung SBRT treatments.
Methods: Both training and inference for a new case follow a feed-forward path through our proposed TransNet network. TransNet is trained with several supervision mechanisms that optimize its learnable parameters to learn the 2D-to-3D transformation: a single 2D projection image is input to generate a synthetic 3D CT. The training dataset comprises phase-binned 3D CT images and their corresponding 2D projection images from patient 4D CT studies. The 3D CT of the same respiratory phase as a given 2D projection image serves as the ground truth supervising TransNet. During inference, the trained TransNet extracts features from a patient's 2D CBCT projection image and derives a synthetic 3D CBCT. Our method was validated using clinical images from the treatments of 45 lung SBRT patients.
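The abstract does not specify TransNet's architecture, so the inference step can only be sketched schematically. The toy function below is a hypothetical stand-in, not the actual TransNet: a single random linear map that lifts a 2D projection of shape (H, W) to a synthetic volume of shape (D, H, W), illustrating only the feed-forward 2D-to-3D input/output contract described above.

```python
import numpy as np

def infer_volume(proj2d, weights, depth):
    """Toy stand-in for a trained 2D-to-3D network: one linear map lifting
    a single projection image to a synthetic volume. The real TransNet is a
    deep network; this mirrors only its input/output shapes."""
    h, w = proj2d.shape
    flat = proj2d.reshape(-1)            # flatten projection: (H*W,)
    vol = weights @ flat                 # linear "transformation": (D*H*W,)
    return vol.reshape(depth, h, w)      # synthetic 3D volume

# Hypothetical usage: a 16x16 projection lifted to an 8x16x16 volume.
# Weights are random here; in the real method they would be learned.
rng = np.random.default_rng(42)
proj = rng.normal(size=(16, 16))
W = rng.normal(size=(8 * 16 * 16, 16 * 16)) / 256.0
vol = infer_volume(proj, W, depth=8)
```

In the actual method, one such forward pass per incoming CBCT projection would yield an instantaneous volumetric estimate during delivery.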
Results: The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) between the synthetic and original CBCT images were 134.8 ± 30.8 HU, 18.8 ± 1.3 dB, and 0.93 ± 0.02, respectively.
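The three reported metrics have standard definitions, sketched below in NumPy. The `data_range` used for PSNR is an assumption for illustration; the abstract does not state the dynamic range used.

```python
import numpy as np

def mae(a, b):
    """Mean absolute error (in HU when inputs are CT/CBCT volumes)."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range=1000.0):
    """Peak signal-to-noise ratio in dB. data_range is an assumed
    dynamic range; the actual value used in the study is not given."""
    mse = np.mean((a - b) ** 2)
    return float(20.0 * np.log10(data_range / np.sqrt(mse)))

def ncc(a, b):
    """Normalized cross-correlation (Pearson form, in [-1, 1])."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) /
                 (np.sqrt(np.sum(a0 ** 2)) * np.sqrt(np.sum(b0 ** 2))))

# Illustrative usage on random stand-in volumes (not patient data).
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 100.0, size=(8, 8, 8))
synth = truth + rng.normal(0.0, 10.0, size=(8, 8, 8))
scores = (mae(truth, synth), psnr(truth, synth), ncc(truth, synth))
```

Lower MAE and higher PSNR/NCC indicate closer agreement between the synthetic and original CBCT.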
Conclusion: We have evaluated the effectiveness of our recently developed deep learning-based method for deriving instantaneous synthetic CBCT images from single 2D projection images using real patient data. This study demonstrates the potential to achieve real-time volumetric imaging during lung SBRT delivery, helping to ensure treatment accuracy.