Purpose: To evaluate the performance of a convolutional neural network-based algorithm for automatic segmentation of the whole heart from non-contrast radiotherapy planning CTs with image artifacts.
Methods: We obtained non-contrast planning CTs with manual contours of the whole heart from 100 breast cancer patients in the Radiotherapy Comparative Effectiveness (RadComp) clinical trial. We employed a convolutional neural network, U-net, for automatic segmentation of the whole heart. A total of 66 patients without image artifacts were used for training, and 34 patients with image artifacts were used to test the performance of the segmentation method. Each test patient had image artifacts in at least one CT slice, located in the left breast (n=4), right breast (n=6), both breasts (n=15), heart (n=2), or other regions (n=7). Automatic segmentations were compared with manual delineations to evaluate the geometric performance of our method using the Dice similarity coefficient (DSC).
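The DSC used for evaluation measures volumetric overlap between the automatic and manual contours, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of this computation on binary voxel masks is shown below; the function name and array shapes are illustrative, not part of the study's implementation:

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)
    intersection = np.logical_and(auto, manual).sum()
    denom = auto.sum() + manual.sum()
    # Two empty masks are defined here as perfect agreement.
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two overlapping 2D masks (4 and 6 voxels, 4 shared)
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(round(dice_coefficient(a, b), 2))  # 2*4/(4+6) = 0.8
```

A DSC of 1.0 indicates perfect overlap with the manual contour; values near 0.90, as reported below, indicate close geometric agreement.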
Results: Training ran for 400 epochs and stabilized after approximately 46 hours. The DSCs for the image sets with artifacts in the left breast, right breast, both breasts, heart, and other regions were 0.90 ± 0.03, 0.90 ± 0.04, 0.91 ± 0.04, 0.91 ± 0.01, and 0.79 ± 0.15, respectively. The lowest DSCs were observed in large patients whose anatomy extended outside the field of view.
Conclusion: Compared with DSC values reported in the literature for convolutional neural network-based segmentation, we found that image artifacts substantially reduce the performance of automatic segmentation. We plan to evaluate the dosimetric impact of the image artifacts.
Funding Support, Disclosures, and Conflict of Interest: This work was funded in part by the intramural program of the National Institutes of Health, National Cancer Institute, Division of Cancer Epidemiology and Genetics.