
Session: Data Science Autoplanning and Autosegmentation

Developing a Head-And-Neck CBCT Segmentation Network From Unlabeled Data Via Domain Adaptation and Self-Training

T Mengke, X Liang, H Morgan, H Shao, S Jiang, Y Zhang*, UT Southwestern Medical Center, Dallas, TX


SU-H430-IePD-F5-4 (Sunday, 7/10/2022) 4:30 PM - 5:00 PM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 5

Purpose: Deep learning-based automatic segmentation is challenging for head-and-neck (HN) CBCTs: high-quality segmentation labels are scarce due to the cost of manual contouring, and the images offer limited soft-tissue contrast and inferior image quality. We developed an unsupervised-learning-based HN-CBCT segmentation technique via domain adaptation from CT and self-training.

Methods: To enable cross-domain adaptation, we trained a Cycle-GAN model between HN-CT and HN-CBCT images and used it to convert HN-CTs (with segmentation labels) into synthesized CBCT (sCBCT) images. Using the sCBCTs and the known segmentation labels from the original HN-CTs, we trained a multi-class HN-CBCT segmentation network (Seg-sCBCT). To fine-tune Seg-sCBCT via self-training, we used it to infer pseudo-labels on the real, unlabeled CBCT set. The real CBCT set with these inferred labels was combined with the sCBCT set and its original CT labels to self-train a final HN-CBCT segmentation network (Seg-CBCT-ST). For evaluation, we used an in-house dataset of 28 HN patients with both pre-treatment CT and intra-treatment CBCT images, and trained Cycle-GAN, Seg-sCBCT, and Seg-CBCT-ST progressively. For comparison, we also trained a segmentation network directly on HN-CTs (Seg-CT). We compared Seg-CT, Seg-sCBCT, and Seg-CBCT-ST on an independent set of 9 HN-CBCTs with physician-labeled segmentations as the gold standard.
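The self-training step described above can be sketched as a simple data-combination routine: the current model infers pseudo-labels on the unlabeled real CBCTs, which are then pooled with the synthesized CBCTs and their original CT labels. This is a minimal illustrative sketch, not the authors' implementation; the function names and the toy threshold "model" are hypothetical stand-ins for the trained Seg-sCBCT network.

```python
import numpy as np

def infer_pseudo_labels(model, images):
    """Run the current segmentation model on unlabeled images to obtain pseudo-labels."""
    return [model(img) for img in images]

def build_self_training_set(scbct_images, ct_labels, real_cbct_images, model):
    """Pool synthesized CBCTs (with original CT labels) and real CBCTs (with pseudo-labels)."""
    pseudo_labels = infer_pseudo_labels(model, real_cbct_images)
    images = list(scbct_images) + list(real_cbct_images)
    labels = list(ct_labels) + pseudo_labels
    return images, labels

# Toy stand-in "model": intensity thresholding instead of a trained network.
toy_model = lambda img: (img > 0.5).astype(np.uint8)

rng = np.random.default_rng(0)
scbct = [rng.random((4, 4)) for _ in range(3)]          # synthesized CBCTs
ct_lbl = [(x > 0.5).astype(np.uint8) for x in scbct]    # labels carried over from CT
real = [rng.random((4, 4)) for _ in range(2)]           # real, unlabeled CBCTs

imgs, lbls = build_self_training_set(scbct, ct_lbl, real, toy_model)
print(len(imgs), len(lbls))  # 5 5
```

The pooled `(imgs, lbls)` pairs would then be used to train the final Seg-CBCT-ST network.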

Results: Seg-CBCT-ST achieved superior segmentation accuracy on the unseen HN-CBCTs compared to both Seg-CT and Seg-sCBCT. For the esophagus, the average (±s.d.) Dice similarity coefficients were 0.334±0.169, 0.389±0.184, and 0.430±0.163 for Seg-CT, Seg-sCBCT, and Seg-CBCT-ST, respectively. For the larynx, the corresponding Dice coefficients were 0.610±0.049, 0.610±0.031, and 0.638±0.039. For other structures, including the brainstem, mandible, and spinal cord, Seg-CBCT-ST consistently outperformed the other models, though by smaller margins.
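The Dice similarity coefficient used for these comparisons is a standard overlap measure between a predicted mask and the gold-standard mask. A minimal NumPy version (the `eps` smoothing term is an implementation convenience, not specified in the abstract):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(pred, gt), 3))  # 0.667
```

A Dice of 1.0 indicates perfect overlap; 0.0 indicates none. The low absolute esophagus values reflect how difficult that structure is on low-contrast CBCT.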

Conclusion: Through a domain-adaptation and self-training strategy, we successfully adapted CTs and their segmentations to unlabeled CBCTs. The pipeline can be applied to generate unsupervised CBCT segmentation models, potentially improving the workflow of online adaptive radiotherapy.

Funding Support, Disclosures, and Conflict of Interest: The research was supported by grants from the National Institutes of Health (R01CA258987, R01CA240808).


Cone-beam CT, Segmentation


IM/TH- Image Segmentation Techniques: Machine Learning
