
Session: AI-Based Auto-Segmentation and Auto-Contouring - II

Improving Cone-Beam CT Based Organ Segmentation with Attention and Knowledge Transfer

H Zhou1*, Y Min2, A Kishan2, S Yoon2, M Cao2, D Ruan1,2, (1) Department of Bioengineering, UCLA, Los Angeles, CA, (2) Department of Radiation Oncology, UCLA, Los Angeles, CA


TU-F-TRACK 6-5 (Tuesday, 7/27/2021) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Purpose: It is desirable to utilize cone-beam CT (CBCT) for on-table patient anatomy monitoring to perform daily dose accumulation and even plan adaptation. However, CBCT suffers from low image quality, making segmentation difficult to perform either manually or automatically, and the resulting contours are usually associated with a high level of uncertainty and inconsistency. In this study, we aim to improve CBCT-based segmentation by proposing a deep learning approach with common domain embedding to transfer planning-quality image inference and contour confidence to CBCT-based segmentation.

Methods: A YOLO detector trained from multiple views was first applied to localize the region of segmentation interest. Using a 2.5D UNet as the basic segmentation structure, two approaches to injecting fan-beam CT (FBCT) into the segmentation module were explored: (1) a brute-force approach, where the training data contained images and the corresponding contours from both modalities, entering the network randomly; (2) a two-stream model with a partially coupled encoder and early inference, where CBCT went through one stream and a randomized subset of CBCT and FBCT went through the other. The proposed approach was applied to the prostate region, where the rectum and bladder were segmented from the background.
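The partial encoder coupling in approach (2) can be illustrated with a toy dense-layer analogue (a minimal sketch, not the actual 2.5D UNet; all layer sizes and names here are illustrative assumptions): each stream has its own first stage, while the second stage and the early-inference head reuse a single shared weight matrix, which is what pushes both modalities toward a common feature domain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Modality-specific first-stage weights, one per stream (shapes are toy values).
W1_cbct = rng.standard_normal((64, 32)) * 0.1
W1_mix = rng.standard_normal((64, 32)) * 0.1
# Partially coupled encoder: the second stage shares one weight matrix
# across both streams, embedding CBCT and FBCT features in a common domain.
W2_shared = rng.standard_normal((32, 16)) * 0.1
# Early-inference head: coarse 3-class logits (background / rectum / bladder).
W_head = rng.standard_normal((16, 3)) * 0.1

def relu(x):
    return np.maximum(x, 0.0)

def encode(x, W1):
    """One stream: modality-specific stage, then shared stage, then shared head."""
    return relu(relu(x @ W1) @ W2_shared) @ W_head

x_cbct = rng.standard_normal((5, 64))   # batch of CBCT feature vectors
x_mix = rng.standard_normal((5, 64))    # randomized CBCT/FBCT batch
logits_cbct = encode(x_cbct, W1_cbct)
logits_mix = encode(x_mix, W1_mix)
print(logits_cbct.shape, logits_mix.shape)  # (5, 3) (5, 3)
```

During training, gradients from both streams would update `W2_shared`, so the FBCT stream regularizes the features the CBCT stream learns.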

Results: The two-stream model enhanced the segmentation accuracy of the rectum/bladder to DSC 0.68±0.09/0.92±0.02 for testing subject 1 and 0.79±0.04/0.87±0.01 for testing subject 2, with MSD 3.67±1.23/1.77±0.51mm and 3.24±1.62/3.75±0.64mm, respectively. In comparison, the baseline UNet trained with CBCT only achieved DSC 0.66±0.07/0.90±0.03 and 0.76±0.04/0.77±0.03, with MSD 3.47±0.58/2.20±0.88mm and 2.80±0.54/5.43±0.81mm; trained with CBCT+FBCT, it achieved DSC 0.67±0.08/0.91±0.02 and 0.78±0.04/0.86±0.01, with MSD 3.51±0.85/1.96±0.64mm and 3.59±0.97/4.55±0.64mm.
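The DSC values above follow the standard Dice similarity coefficient, DSC = 2|A∩B| / (|A|+|B|); a minimal sketch on toy binary masks (MSD additionally requires surface extraction and is omitted here):

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy masks: two 4x4 squares (16 voxels each) overlapping in 12 voxels.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True
print(round(dice_coefficient(a, b), 3))  # 0.75
```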

Conclusion: The proposed YOLO + two-stream model effectively enhanced segmentation performance in CBCT by regulating its feature extraction and early inference with FBCT. We will adjust the coupling to allow for modality-specific feature variation and also to regulate the output contour. Moreover, we are actively investigating approaches to minimize the uncertainty associated with the CBCT labels themselves.



Keywords: Cone-beam CT, Contour Extraction


Taxonomy: IM- Cone Beam CT: Machine learning, computer vision
