Ballroom C
Purpose: To develop a motion-modeling-based convolutional neural network (MM-CNN) to estimate real-time CBCTs from a single x-ray projection.
Methods: MM-CNN builds on principal-component-analysis (PCA)-driven motion modeling, which solves for deformation vector fields (DVFs) as linear combinations of principal components (PCs) to deform a prior image volume into real-time CBCTs. The inputs to MM-CNN include the prior volume, the real-time cone-beam projection, and the PCs. To encode the x-ray projection angle into the input and train an angle-agnostic model, we incorporated a cone-beam projector into MM-CNN to generate a digitally reconstructed radiograph (DRR) of the prior volume at the same angle. The real-time projection and the DRR were then back-projected into 3D volumes by a subsequent back-projection layer. Both volumes, together with the prior volume and the PCs, were fed into subsequent MM-CNN blocks to predict the PC weightings that construct the final DVFs for CBCT generation. MM-CNN was trained in an unsupervised manner, with the loss measured directly between projected DRRs of the predicted real-time CBCTs and the input real-time cone-beam projections. We used a 4D-CT dataset of 38 lung patients to train and evaluate MM-CNN (22/4/12 for training/validation/testing). For testing, the estimated real-time CBCTs were compared against the 'ground-truth' 4D-CTs, and DVF errors were evaluated using manually tracked lung landmarks. A conventional optimization-based PCA motion-modeling method was used for comparison.
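The core of the PCA motion model above can be sketched as follows: the DVF is a mean field plus a weighted sum of principal components, and the prior volume is warped through that DVF. This is a minimal illustrative sketch, not the authors' implementation; the function name, array layouts, and the choice of `scipy.ndimage.map_coordinates` as the warping operator are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reconstruct_cbct(prior, dvf_mean, pcs, weights, order=1):
    """Deform a prior volume with a PCA-parameterized DVF (illustrative).

    prior:    (Z, Y, X) prior image volume
    dvf_mean: (3, Z, Y, X) mean deformation vector field
    pcs:      (K, 3, Z, Y, X) principal components of the DVF
    weights:  (K,) PC weightings -- the quantities MM-CNN predicts
    """
    # DVF as a linear combination of principal components
    dvf = dvf_mean + np.tensordot(weights, pcs, axes=1)   # (3, Z, Y, X)

    # Sampling grid: identity coordinates plus the DVF
    coords = np.indices(prior.shape, dtype=np.float64) + dvf

    # Warp the prior volume to obtain the real-time CBCT estimate
    return map_coordinates(prior, coords, order=order, mode="nearest")

# Toy example: a single-PC model encoding a pure 3-voxel shift along z
prior = np.zeros((8, 8, 8)); prior[4, 4, 4] = 1.0
dvf_mean = np.zeros((3, 8, 8, 8))
pcs = np.zeros((1, 3, 8, 8, 8)); pcs[0, 0] = 1.0   # unit displacement along z
cbct = reconstruct_cbct(prior, dvf_mean, pcs, np.array([3.0]))
```

With a predicted weighting of 3.0 on the unit-shift PC, the warped volume samples the prior 3 voxels away along z, so the bright voxel moves from index (4, 4, 4) to (1, 4, 4).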
Results: Tested with mixed 0°, 45° and 90° x-ray projection angles, MM-CNN yielded real-time CBCTs with an average (±S.D.) relative error (RE) of 8.44% (±1.72%) and an average DVF error of 2.70 mm (±2.56 mm), compared with 13.22% (±4.9%) and 6.65 mm (±5.17 mm) for the prior volume (before deformation), and 7.75% (±1.44%) and 2.56 mm (±2.20 mm) for the conventional PCA-based motion-modeling algorithm. MM-CNN solved each DVF in ~200 ms, versus ~40 s for the conventional algorithm.
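For reference, the relative error reported above is commonly computed as the ratio of the L2 norm of the intensity difference to the L2 norm of the ground-truth volume; that definition is an assumption here, as the abstract does not state the exact formula.

```python
import numpy as np

def relative_error(estimated, ground_truth):
    """RE = ||estimated - ground_truth||_2 / ||ground_truth||_2
    (a common voxel-intensity-based definition, assumed here)."""
    diff = estimated.astype(np.float64) - ground_truth.astype(np.float64)
    return np.linalg.norm(diff.ravel()) / np.linalg.norm(ground_truth.ravel())

# Toy example: a uniform 10% intensity overestimate gives RE ~= 0.10
gt = np.ones((4, 4, 4))
est = 1.1 * gt
re = relative_error(est, gt)   # re is approximately 0.10
```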
Conclusion: MM-CNN provides a generalized model applicable to different patients and arbitrary x-ray projection angles, offering an efficient tool for on-board, real-time motion estimation in image-guided radiotherapy.
Funding Support, Disclosures, and Conflict of Interest: The research was supported by grants from the National Institutes of Health (R01CA258987, R01CA240808).