Purpose: To propose a proof-of-concept solution for real-time virtual visualization of a target in motion based on in-treatment optical surface signals and pre-treatment images. A deep learning model was developed and validated to map body surface displacement to internal anatomy deformation.
Methods: Body contours were segmented from 4DCT images to simulate the optical surface. A free-form deformable image registration (DIR) algorithm with isotropic variation regularization was used to register the end-of-exhalation (EOE) phase to the other phases. The dimensionality of the deformation vector field was reduced to its first two principal components (PCs). A simple neural network composed of convolutional and fully connected layers was trained to predict the two PC scores of each phase from the surface displacement with respect to the EOE surface. The instant deformation field was then reconstructed from the predicted scores and used to warp the EOE images, yielding real-time CT images. The approach was tested on three schemes: (1) different respiration patterns of a 4D-XCAT digital phantom mimicking pre-treatment (training) and in-treatment motions, respectively; (2) 4DCT images with annotated landmarks (5 patients) from the public DIR-Lab dataset; and (3) 4DCT images with physician-delineated tumor contours (5 patients) from the public TCIA 4D-Lung dataset. The EOE-to-EOI (end-of-inhalation) pairs were excluded from training but used for validation.
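The dimensionality-reduction and reconstruction steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the deformation vector fields are random stand-ins for DIR output, the grid shape and phase count are hypothetical, and the reference PC scores are substituted for the network's predictions (the convolutional/fully connected network itself is omitted).

```python
import numpy as np

# Hypothetical setup: 10 respiratory phases, DVFs on a 16x16x16 grid
# with 3 displacement components, flattened to row vectors.
rng = np.random.default_rng(0)
n_phases, dvf_dim = 10, 16 * 16 * 16 * 3

# Stand-in for DIR output: one deformation vector field (DVF) per phase,
# registering the EOE phase to that phase.
dvfs = rng.normal(size=(n_phases, dvf_dim))

# --- PCA: reduce each DVF to its first two principal-component scores ---
mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]               # first two PCs, shape (2, dvf_dim)
scores = centered @ components.T  # per-phase PC scores, shape (n_phases, 2)

# In the abstract, a conv + fully connected network maps the surface
# displacement (relative to the EOE surface) to these two scores; here the
# reference scores stand in for the network's output.
predicted_scores = scores

# --- Reconstruct the instant DVF from the two predicted scores ---
reconstructed = predicted_scores @ components + mean_dvf

# The reconstructed DVF would then warp the EOE image (e.g. by resampling
# voxel coordinates shifted by the DVF) to synthesize the current-phase CT.
print(scores.shape, reconstructed.shape)
```

Because only two PC scores must be predicted per time point, the per-frame inference cost is small, which is what makes the real-time reconstruction step plausible.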
Results: Owing to the relatively low complexity of its respiratory motion, the highest validation accuracy for the tumor centroid trajectory (0.04 mm ± 0.02 mm) was observed on the XCAT phantom. For the 300 landmarks annotated on the EOI images of DIR-Lab, the mean displacements between predicted and reference positions were 1.05 mm, 0.88 mm, 1.18 mm, 1.72 mm, and 1.53 mm for the 5 patients, respectively. The Dice coefficients between predicted and reference tumor contours at the EOI phase were 0.92, 0.84, 0.93, 0.85, and 0.71, respectively, for the 5 patients from the TCIA 4D-Lung dataset.
Conclusion: This preliminary experiment demonstrated a close relationship between surface geometry and internal anatomy, supporting the potential of a deep learning model to synthesize real-time 4D images from dose-free in-treatment surface signals.
Funding Support, Disclosures, and Conflict of Interest: The authors thank Jing Cai, W. Paul Segars, Hao Wu, Chenguang Li, and Hongjia Liu for their generous help. This study was supported by the National Natural Science Foundation of China (11505012), the Science Foundation of Peking University Cancer Hospital (2021-1), and the Peking University Health Science Center Medical Education Research Funding Project (2020YB34).
Optical Imaging, Image-guided Therapy, CT