Purpose: Existing auto-segmentation methods have limited success for complex organs (e.g., bowels). Consequently, time-consuming slice-by-slice review and editing is often required. To address this issue, we propose a fast, deep learning-based, semi-automatic contouring method that predicts the contour on a slice using the user-approved contour from the previous slice.
Methods: To demonstrate the idea, the deep learning model was trained and tested on stomach and small bowel contours using T2-weighted MRIs acquired from 48 patients with abdominal tumors. All MRIs were pre-processed (e.g., bias corrected, normalized, and cropped) and carefully delineated to provide ground truth. The network backbone was a UNet with a 3-channel input comprising the image slice to be segmented and the image and contour of the previous slice. Data augmentation by randomized 3D rotation, scaling, and image-pair sampling at slice intervals of 1, 2, or 3 was used during model training. Patient-based leave-one-out cross-validation was employed to evaluate model performance. Quality of the predicted contours was evaluated quantitatively using the Dice similarity coefficient (DSC) and qualitatively by a human rater, with scores of 1, 2, and 3 indicating acceptable, minor editing required, and major editing required, respectively.
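To make the 3-channel input construction, the slice-by-slice prediction, and the DSC evaluation concrete, a minimal PyTorch sketch is given below. This is an illustration under stated assumptions, not the authors' implementation: the function names (build_input, propagate, dice_2d), the trained unet module, and the 0.5 sigmoid threshold are all hypothetical, and in the intended semi-automatic workflow the previous-slice contour would be user-approved before each new prediction rather than propagated automatically.

    import torch

    def build_input(image_k, image_prev, contour_prev):
        # Stack the slice to segment with the previous slice's image and
        # binary contour mask into a (1, 3, H, W) tensor for the UNet.
        return torch.stack([image_k, image_prev, contour_prev], dim=0).unsqueeze(0)

    def dice_2d(pred_mask, gt_mask, eps=1e-8):
        # 2D Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
        intersection = (pred_mask * gt_mask).sum()
        return (2.0 * intersection / (pred_mask.sum() + gt_mask.sum() + eps)).item()

    def propagate(unet, volume, seed_contour, start_k):
        # volume: (S, H, W) MRI; seed_contour: approved mask on slice start_k.
        # unet: assumed trained 2D UNet, (N, 3, H, W) -> (N, 1, H, W) logits.
        contours = {start_k: seed_contour}
        for k in range(start_k + 1, volume.shape[0]):
            x = build_input(volume[k], volume[k - 1], contours[k - 1])
            with torch.no_grad():
                logits = unet(x)
            contours[k] = (torch.sigmoid(logits)[0, 0] > 0.5).float()
        return contours

During training, the previous-slice contour channel would come from the ground-truth delineation of a neighboring slice, sampled at intervals of 1, 2, or 3 slices, matching the image-pair sampling augmentation described above.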
Results: The average 2D-DSC of the predicted contours per patient ranged from 0.82 to 0.96 (median 0.90) for stomach and from 0.74 to 0.93 (median 0.88) for small bowel. The percentages of predicted contours with scores of 1, 2, and 3 were 82.4 ± 10.5%, 14.6 ± 8.9%, and 3.1 ± 4.8% for stomach and 72.7 ± 19.0%, 23.4 ± 15.2%, and 3.9 ± 5.7% for small bowel, respectively.
Conclusion: The proposed method can robustly predict the contour on an MRI slice from the contour on the previous slice for complex structures (e.g., stomach and small bowel). It can be integrated into existing workflows, facilitating fast and accurate segmentation, which is particularly important for MR-guided adaptive radiation therapy.
Funding Support, Disclosures, and Conflict of Interest: Funding support from NIH R01CA247960.