Session: Machine Intelligence in Image Processing and Motion Correction I

Longitudinal Unsupervised Deformable Image Registration Network for Adaptive Radiotherapy

D Lee*, Y Hu, S Alam, J Jiang, L Cervino, P Zhang, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY

Presentations

TH-E-BRC-3 (Thursday, 7/14/2022) 1:00 PM - 2:00 PM [Eastern Time (GMT-4)]

Ballroom C

Purpose: To simultaneously register all longitudinal images acquired during a radiotherapy course, enabling analysis of patients’ anatomical changes and dose accumulation for adaptive radiotherapy (ART).

Methods: To address the unique needs of ART, we designed Seq2Morph, a novel deep learning-based deformable image registration (DIR) network built on a standard registration network, VoxelMorph (Balakrishnan, 2019). The major upgrades are 1) expansion of the inputs to all weekly CBCTs acquired for monitoring treatment response throughout a radiotherapy course, for registration to the planning CT; 2) incorporation of a 3D convolutional long short-term memory (ConvLSTM) module between the encoder and decoder of VoxelMorph, to parse the temporal patterns of anatomical changes; and 3) addition of bidirectional pathways to calculate and minimize inverse consistency errors. Longitudinal image sets from 50 patients, each comprising a planning CT and six weekly CBCTs, were utilized for network training and cross-validation. The outputs were the deformation vector fields (DVFs) for all registration pairs. The loss function was composed of a normalized cross-correlation term for image intensity similarity, a DICE term for contour similarity, and a regularization term for smoothness. For performance evaluation, DICE and Hausdorff distance (HD) between manual and predicted contours of the tumor and esophagus were quantified on a weekly basis and compared with state-of-the-art algorithms, including conventional VoxelMorph and Large Deformation Diffeomorphic Metric Mapping (LDDMM).
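
The loss described above combines an intensity-similarity term, a contour-overlap term, and a smoothness regularizer, with the bidirectional pathways supplying an inverse-consistency penalty. The PyTorch snippet below is a minimal sketch of such a composite loss under assumed tensor shapes; the function names, loss weights, and the inverse-consistency input are illustrative placeholders, not the authors' actual implementation.

```python
# Hypothetical sketch of a Seq2Morph-style composite registration loss.
# Weights, names, and tensor shapes are assumptions for illustration only.
import torch

def ncc_loss(fixed, moving, eps=1e-5):
    """Negative global normalized cross-correlation between two volumes."""
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    return -(f * m).sum() / (f.norm() * m.norm() + eps)

def dice_loss(mask_a, mask_b, eps=1e-5):
    """Soft Dice loss between warped and target contour masks."""
    inter = (mask_a * mask_b).sum()
    return 1.0 - 2.0 * inter / (mask_a.sum() + mask_b.sum() + eps)

def smoothness_loss(dvf):
    """L2 penalty on spatial gradients of a DVF shaped (B, 3, D, H, W)."""
    dz = dvf[:, :, 1:, :, :] - dvf[:, :, :-1, :, :]
    dy = dvf[:, :, :, 1:, :] - dvf[:, :, :, :-1, :]
    dx = dvf[:, :, :, :, 1:] - dvf[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def composite_loss(warped_img, fixed_img, warped_mask, fixed_mask,
                   fwd_dvf, inv_consistency_err,
                   w_dice=1.0, w_smooth=0.01, w_inv=0.1):
    """Weighted sum of the four terms; weights are placeholder values."""
    return (ncc_loss(fixed_img, warped_img)
            + w_dice * dice_loss(warped_mask, fixed_mask)
            + w_smooth * smoothness_loss(fwd_dvf)
            + w_inv * inv_consistency_err.mean())
```

In practice, VoxelMorph-style implementations often use a locally windowed NCC rather than the global form shown here; the global version is used only to keep the sketch short.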

Results: Visualization of the hidden states of Seq2Morph revealed distinct spatiotemporal patterns of anatomical change. Quantitatively, Seq2Morph performed similarly to LDDMM but significantly outperformed VoxelMorph, as measured by GTV DICE (Seq2Morph, LDDMM, VoxelMorph): (0.80±0.07, 0.80±0.08, 0.77±0.08), and 95% HD (mm): (4.1±1.7, 4.2±2.1, 4.4±1.9). Per-patient inference with Seq2Morph took <1 min, much less than LDDMM (~30 min).
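
For readers reproducing a similar evaluation, the SciPy/NumPy snippet below sketches one common way to compute DICE and a 95th-percentile symmetric Hausdorff distance from binary contour masks; the helper names and the distance-transform approach are assumptions, not the authors' evaluation code.

```python
# Hypothetical sketch of DICE and 95% HD computation on 3D binary masks.
import numpy as np
from scipy import ndimage

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def surface(mask):
    """Surface voxels of a binary mask (voxels removed by one erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def hd95(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance (mm) between two masks."""
    surf_a, surf_b = surface(mask_a), surface(mask_b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dt_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dt_to_b[surf_a]  # A-surface -> B-surface distances
    d_ba = dt_to_a[surf_b]  # B-surface -> A-surface distances
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```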

Conclusion: Seq2Morph can provide accurate and fast DIR for longitudinal image studies by exploiting spatiotemporal patterns. It can be easily integrated into a clinical workflow and support both online and offline ART.

Keywords

Cone-beam CT, Deformation, Registration

Taxonomy

IM/TH- Image Registration Techniques: Machine Learning
