Purpose: The accurate segmentation of organs-at-risk (OARs) in CT images is a critical step in radiation therapy for head and neck cancer patients. However, manual delineation of the numerous OARs in this region is time-consuming and challenging due to low soft-tissue contrast. We propose a novel dual shape guided network (DSGnet) to automatically delineate nine important OARs in CT images.
Methods: To deal with the large shape variation and unclear boundaries of OARs in head and neck CT images, we represent each organ shape using an organ-specific unilateral inverse-distance map (UIDM) and guide the segmentation task from two different perspectives: direct shape guidance, which follows the segmentation prediction, and across shape guidance, which shares the segmentation features. In direct shape guidance, the segmentation prediction is supervised by both the true label mask and the true UIDM, implemented through an encoder-decoder mapping from the label space to the distance space. In across shape guidance, the UIDM facilitates the segmentation by optimizing the shared feature maps. Our method was applied to 699 images. The Dice Similarity Coefficient (DSC) and Average Surface Distance (ASD) of the segmentation results were compared with those of seven other published state-of-the-art deep learning methods. The time spent revising the contours automatically generated by our algorithm was compared with the time required for complete manual contouring.
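The abstract does not give the exact UIDM formula. The sketch below shows one plausible construction, assuming "unilateral" means the distance transform is computed only on the interior side of the organ boundary and "inverse" means voxel values decay as 1/(1 + d) with distance d from the boundary; the function name is illustrative and may differ from the authors' definition.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def unilateral_inverse_distance_map(mask):
    """Sketch of an organ-specific UIDM from a binary label mask.

    Assumption: distances are taken only inside the organ (unilateral),
    and inverted so values are largest near the boundary and fall off
    toward the organ interior; background voxels stay zero.
    """
    inside = mask.astype(bool)
    # Euclidean distance from each foreground voxel to the nearest
    # background voxel, i.e., to the organ boundary.
    d_in = distance_transform_edt(inside)
    uidm = np.zeros(mask.shape, dtype=np.float32)
    uidm[inside] = 1.0 / (1.0 + d_in[inside])
    return uidm
```

Under this formulation the map is bounded in (0, 1] inside the organ and exactly zero outside, which would give the network a smooth, boundary-emphasizing regression target alongside the hard label mask.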
Results: Our model achieved an overall DSC of 0.842 and an overall ASD of 1.204 mm across the nine OARs. Compared with the competing methods, our method achieved the best DSC for all nine OARs and the best ASD for eight OARs. Manual delineation time was reduced by 64% when aided by the contours generated by our algorithm.
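The DSC reported above is the standard volumetric overlap measure, 2|A∩B| / (|A| + |B|). A minimal reference implementation for binary masks (the function name is illustrative; ASD, which averages symmetric surface-to-surface distances, is omitted here for brevity):

```python
import numpy as np


def dice_similarity_coefficient(pred, truth):
    """DSC = 2|A∩B| / (|A| + |B|) for two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
```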
Conclusion: We have developed a novel dual shape guided fully convolutional network for the accurate segmentation of organs-at-risk in head and neck CT images.
Funding Support, Disclosures, and Conflict of Interest: SW and JL are in part supported by NIH 1R01CA206100.