
Session: Deep Learning for Image-guided Therapy

Deformable CT Image Registration Using Unsupervised Deep Learning Networks

Y Lei, Y Fu, Z Tian, T Wang, J Zhang, X Dai, J Zhou, J Roper, M McDonald, D Yu, J Bradley, T Liu, X Yang*, Emory University, Atlanta, GA

Presentations

WE-C1030-IePD-F2-5 (Wednesday, 7/13/2022) 10:30 AM - 11:00 AM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 2

Purpose: Quality assurance CT (QACT) scans are often required to assess patient anatomical changes and the resulting dosimetric variances during a course of head and neck (HN) radiotherapy. Currently, this dose assessment is time-consuming and tedious. The purpose of this study is to develop a novel deep learning-based method to deformably register the planning CT (pCT) and QACTs, streamlining the workflow for anatomical variation assessment and dose-volume-histogram (DVH) analysis.

Methods: The proposed CTReg network is trained with several supervision mechanisms that optimize its learnable parameters to estimate the deformation vector fields (DVFs) registering the pCT and QACTs. The pCT and QACTs are used as input, and the CTReg network simultaneously derives two sets of DVFs that register the two images in both directions. A novel dual feasible loss was introduced to train the mutual network, providing additional DVF regularization that preserves topology and reduces tissue folding. During inference, the trained CTReg network extracts features from a patient's pCT and QACT images and derives the DVFs that register the two images. Our method was validated on clinical images of 45 HN cancer patients (160 CTs in total), each with one pCT and 2-5 QACTs. The pCT and QACTs were rigidly registered prior to inference.
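The abstract does not give the exact form of the dual feasible loss, but its stated goal (regularizing the two opposite-direction DVFs so that topology is preserved and folding is reduced) resembles an inverse-consistency penalty: the forward displacement followed by the backward displacement at the displaced location should cancel. A minimal 1-D NumPy sketch of such a penalty, with all function names our own assumptions rather than the authors' implementation:

```python
import numpy as np

def warp_1d(signal, dvf):
    """Warp a 1-D signal by a displacement field via linear interpolation.

    (Illustrative stand-in for the spatial transformer used in
    unsupervised registration networks.)
    """
    x = np.arange(signal.size, dtype=float)
    return np.interp(x + dvf, x, signal)

def inverse_consistency_loss(dvf_fwd, dvf_bwd):
    """Penalize forward/backward DVF pairs that fail to cancel.

    For a well-behaved bidirectional pair, phi_fwd(x) composed with
    phi_bwd evaluated at x + phi_fwd(x) should be (near) zero; large
    residuals indicate the two DVFs are not mutual inverses.
    """
    x = np.arange(dvf_fwd.size, dtype=float)
    # sample the backward displacement at the forward-warped locations
    bwd_at_fwd = np.interp(x + dvf_fwd, x, dvf_bwd)
    residual = dvf_fwd + bwd_at_fwd
    return float(np.mean(residual ** 2))

# A consistent pair (shift right by 1, shift back by 1) scores ~0;
# an inconsistent pair (both shifting the same way) scores high.
dvf_f = np.full(16, 1.0)
dvf_b = np.full(16, -1.0)
print(inverse_consistency_loss(dvf_f, dvf_b))  # near 0.0
print(inverse_consistency_loss(dvf_f, dvf_f))  # large
```

In the actual method this term would be added, as the abstract describes, on top of an image-similarity loss and a smoothness regularizer over the 3-D DVFs; the 1-D version here only illustrates the cancellation idea.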

Results: Image co-registration was evaluated within the body contour before and after CTReg processing: mean absolute error (113.9 HU → 40.8 HU), peak signal-to-noise ratio (22.0 dB → 30.7 dB), normalized cross correlation (0.66 → 0.94) and target registration error (4.4 mm → 2.2 mm).
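The reported metrics (mean absolute error in HU, PSNR in dB, and normalized cross correlation) can all be computed directly from the fixed and deformed image arrays. A hedged NumPy sketch follows; the `data_range` value and the omission of body-contour masking are our assumptions for illustration, and target registration error is not included because it requires paired anatomical landmarks rather than intensities:

```python
import numpy as np

def registration_metrics(fixed, moving, data_range=2000.0):
    """Return (MAE, PSNR, NCC) between two CT-like intensity arrays.

    fixed, moving : arrays of matching shape (HU values).
    data_range    : assumed dynamic range used for PSNR; in practice
                    this would reflect the clinical HU window.
    """
    fixed = np.asarray(fixed, dtype=float)
    moving = np.asarray(moving, dtype=float)

    # Mean absolute error in HU.
    mae = float(np.mean(np.abs(fixed - moving)))

    # Peak signal-to-noise ratio in dB.
    mse = float(np.mean((fixed - moving) ** 2))
    psnr = 10.0 * np.log10(data_range ** 2 / mse) if mse > 0 else np.inf

    # Normalized cross correlation (zero-mean, unit-norm).
    f = fixed - fixed.mean()
    m = moving - moving.mean()
    ncc = float(np.sum(f * m) / (np.linalg.norm(f) * np.linalg.norm(m)))

    return mae, psnr, ncc
```

A constant HU offset between the two images, for example, shifts MAE by exactly that offset while leaving NCC at 1.0, which is why NCC and MAE are complementary here.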

Conclusion: We have evaluated the effectiveness of our deep learning-based DIR method for registering the pCT and QACTs using patient data. The proposed DIR method has the potential to facilitate fast dosimetric evaluations on the QACT and support clinical plan adaptation decisions.

Keywords

Not Applicable / None Entered.

Taxonomy

IM/TH- Image Registration: CT
