Session: Multi-Disciplinary: Segmentation II

Saliency-Guided Deep Learning Network for Automatic Gross Target Volume Delineation in Partial Breast Irradiation

M Kazemimoghadam*, W Chi, A Rahimi, N Kim, P Alluri, C Nwachukwu, X Gu, W Lu, UT Southwestern Medical Center, Dallas, TX

Presentations

TU-IePD-TRACK 4-7 (Tuesday, 7/27/2021) 3:00 PM - 3:30 PM [Eastern Time (GMT-4)]

Purpose: To accommodate an efficient scan-plan-treat clinical workflow in partial breast irradiation (PBI) systems such as GammaPod, fast, accurate, and automated target delineation is desired. In this study, we develop a saliency-guided deep learning segmentation (SDL-Seg) algorithm for automatic gross tumor volume (GTV) delineation in post-operative PBI.

Methods: Our approach incorporates saliency information into a U-Net model for target delineation. Visual saliency maps were generated using surgical clips identified on CT images as markers, and a distance transformation coupled with a Gaussian filter was adopted to convert the markers' locations into probability maps. The CT images and the corresponding probability maps form a two-channel input for the segmentation network. The dataset used for model training, validation, and testing (19:5:5) comprises 145 prone CT images from 29 post-operative breast cancer patients who had surgical clips and received a 5-fraction PBI regimen on GammaPod. We used the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), and average symmetric surface distance (ASD) to assess the segmentation results of SDL-Seg and compared them with those generated by a basic U-Net.
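The sketch below illustrates the kind of clip-to-saliency conversion described above: a distance transform from the clip locations followed by Gaussian smoothing, with the result stacked with the CT as a two-channel input. This is a minimal illustration under stated assumptions, not the authors' implementation: the names clip_saliency_map, clip_mask, the exponential falloff, and sigma_mm = 5.0 are hypothetical choices.

import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def clip_saliency_map(clip_mask, voxel_spacing_mm, sigma_mm=5.0):
    """Convert surgical-clip locations (binary 3D mask) into a smooth,
    probability-like saliency map. Illustrative sketch only."""
    # Distance (in mm) from every voxel to its nearest clip voxel.
    dist_mm = distance_transform_edt(~clip_mask.astype(bool), sampling=voxel_spacing_mm)
    # Map distance to a value that decays away from the clips (assumed falloff),
    # then smooth with a Gaussian filter as described in the abstract.
    saliency = np.exp(-dist_mm / sigma_mm)
    saliency = gaussian_filter(saliency, sigma=[sigma_mm / s for s in voxel_spacing_mm])
    # Normalize to [0, 1] so it can be stacked with the normalized CT.
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

# Two-channel network input: channel 0 = normalized CT, channel 1 = saliency map.
# ct_norm = (ct - ct.mean()) / ct.std()
# net_input = np.stack([ct_norm, clip_saliency_map(clip_mask, spacing)], axis=0)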

Results: Our model achieves means (standard deviations) of 76.4 (2.7)%, 6.76 (1.83) mm, and 1.9 (0.66) mm for DSC, HD95, and ASD, respectively, outperforming the basic U-Net by 13.8%, 1.63 mm, and 0.9 mm. For all 5 testing cases, the saliency-guided U-Net showed increased DSC compared to the basic U-Net. The average DSC per case using SDL-Seg is 74.3%±4.8%, 79.35%±3.7%, 75.6%±1.7%, 73.6%±4.4%, and 79.3%±1.5%, while with the basic U-Net the outcomes are 70.8%±2.1%, 57.8%±21.1%, 63.2%±4.3%, 51.9%±9.5%, and 69.2%±3.7%.
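For readers unfamiliar with the reported metrics, the following sketch shows one common way to compute DSC, HD95, and ASD for binary 3D masks. It is an assumption-laden illustration rather than the authors' evaluation code; in particular, surface extraction via morphological erosion and the symmetric pooling of surface distances are illustrative conventions.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def _surface_distances(a, b, spacing):
    """Distances (mm) from surface voxels of a to the nearest surface voxel of b."""
    surf_a = a & ~binary_erosion(a)
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95_and_asd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance and average symmetric surface distance."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    d_pred_to_gt = _surface_distances(pred, gt, spacing)
    d_gt_to_pred = _surface_distances(gt, pred, spacing)
    all_d = np.concatenate([d_pred_to_gt, d_gt_to_pred])
    hd95 = np.percentile(all_d, 95)   # 95th-percentile Hausdorff distance (mm)
    asd = all_d.mean()                # average symmetric surface distance (mm)
    return hd95, asd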

Conclusion: We developed a deep learning model that integrates visual saliency information into the network for GTV delineation. The results demonstrate that SDL-Seg outperforms the basic U-Net and is a promising approach for improving the efficiency and accuracy of the on-line treatment planning procedure of PBI systems such as GammaPod.
