Purpose: Although interest in Big Data for research and quality assurance in radiation oncology is growing, the lack of a standard nomenclature has hindered progress. AAPM TG-263 has proposed a standard for structure labels, but clinical adoption for prospective or retrospective data has been sparse. Previous work on structure name standardization has used radiomic features, such as mean, kurtosis, skew, and uniformity, but the selection of features was done manually. Deep learning has demonstrated the ability to learn such features from the data alone, and so we investigate how CNN-based models can learn from imaging and dose datasets.
Methods: We used 9,750 structures from 550 prostate patients to automatically label the following structures: Bladder, Rectum, Left Femur, Right Femur, Small Bowel, Large Bowel, PTV, and a group containing all other structures. A bounding box was centered on each structure in the dataset and a bitmap was extracted; the same bounding box was used to extract the imaging and dose data. Four CNN models were tested: structure only, structure plus image, structure plus dose, and structure plus image plus dose.
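The extraction step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the array names, the fixed cubic crop size, and the assumption that the box lies fully inside the volumes are all hypothetical.

```python
import numpy as np

def extract_patch(mask, ct, dose, size=32):
    """Center a size^3 bounding box on a structure's binary mask and crop
    the same region from the mask, image, and dose volumes, returning a
    (3, size, size, size) multi-channel array. Sketch only: assumes the
    crop window lies fully inside the volumes."""
    coords = np.argwhere(mask)                              # voxels inside the structure
    center = (coords.min(axis=0) + coords.max(axis=0)) // 2  # bounding-box center
    lo = center - size // 2
    sl = tuple(slice(l, l + size) for l in lo)              # identical crop for all channels
    return np.stack([mask[sl].astype(np.float32),
                     ct[sl].astype(np.float32),
                     dose[sl].astype(np.float32)])
```

Each of the four models then receives a subset of these channels (structure bitmap alone, or combined with the image and/or dose channels).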
Results: The structure-only, structure-plus-image, structure-plus-dose, and structure-plus-image-plus-dose models achieved weighted F1-scores of 0.842, 0.791, 0.845, and 0.861, respectively, with the combination of structure, image, and dose data performing best. When tested on an external dataset of 50 patients from another institution, the corresponding F1-scores were 0.829, 0.801, 0.831, and 0.857.
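The weighted F1-score reported above averages the per-class F1 with weights proportional to each class's support, so frequent structures count more than rare ones. A minimal reference computation (the label values are illustrative, not from the study):

```python
def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-class F1 scores."""
    n = len(y_true)
    total = 0.0
    for lab in set(y_true):
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        total += (y_true.count(lab) / n) * f1   # weight by class support
    return total
```

This matches scikit-learn's `f1_score(..., average="weighted")` behavior on multi-class labels.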
Conclusion: Adding dose information to the structure bitmap and imaging data can improve the ability to correctly label delineated structures. Each OAR or target receives a characteristic dose pattern determined by its position relative to the target, which may contribute to the improved results. Although very large annotated radiotherapy datasets are not yet available, CNN models can still learn radiomic features without those features being manually defined.
Keywords: Computer Vision, Prostate Therapy, Contour Extraction