Purpose: To clinically evaluate the accuracy of commercially available Deep-Learning-Based Auto-Segmentation (DLAS) software for auto-segmentation of organs at risk (OARs) in the head and neck (H&N) region on computed tomography (CT) images.
Methods: 22 organs at risk (brainstem, chiasm, L/R optic nerves, L/R eyes, L/R lenses, L/R cochleae, L/R temporal lobes, mandible, oral cavity, L/R parotid glands, L/R submandibular glands, constrictor muscle, esophagus, larynx, and spinal cord) in 40 H&N planning CT images were first segmented by skilled clinicians to serve as the ground truth. Next, auto-segmentation was performed with AccuContour (Version 3.0, Manteia) using the vendor-provided H&N model. The DLAS contours were compared against the ground-truth contours using the Dice similarity coefficient (DSC) and the mean distance. The DSC is a statistical metric that quantifies the volumetric overlap between contours generated by the two segmentation methods, while the mean distance characterizes the proximity of the two contour surfaces in 3D space.
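As an illustration of the overlap metric described above, the sketch below computes the DSC, defined as 2|A∩B|/(|A|+|B|), on toy binary masks. This is a generic implementation for illustration only; it is not the vendor's code, and the mask values are hypothetical.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2*|A intersect B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D example (hypothetical data): two 4x4 squares on an 8x8 grid,
# the "auto" mask shifted by one voxel relative to the "ground truth".
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True          # clinician-drawn ground-truth mask
auto = np.zeros((8, 8), dtype=bool)
auto[3:7, 3:7] = True        # DLAS-generated mask

dsc = dice_coefficient(gt, auto)  # 2*9 / (16+16) = 0.5625
```

In practice the same formula is applied voxel-wise to each 3D organ mask; a DSC of 1.0 indicates perfect overlap and 0 indicates none.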
Results: The average time to auto-segment all 22 structures was 1 min/patient. In addition to this high efficiency, the average DSC for each structure was grouped into three categories: Great (0.7≤DSC≤1.0), Fair (0.5≤DSC<0.7), and Poor (0≤DSC<0.5).
Conclusion: AccuContour DLAS of computed tomography images proved comparable to expert-created H&N OAR contours and can be a valuable tool for efficiently delineating normal tissues with high quality. Furthermore, this vendor-provided DLAS greatly reduces the time required to train models and segment each structure, and helps eliminate inter- and intra-reader variability in clinical practice.