Purpose: Adaptive radiotherapy (ART) workflows are increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the need for recontouring and its associated time burden hinder a real-time, online ART workflow. In response to this challenge, auto-segmentation approaches based on deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Although DLS methods show particular promise, implementing them in a clinical setting remains challenging, chiefly because of the difficulty of curating a data set of sufficient size and quality to yield a generalizable trained model. In this study, we use a patient's prior information to leverage the behavior of an intentionally overfit model, applying the Intentional Deep Overfit Learning (IDOL) framework to the auto-segmentation task.
Methods: The intentionally overfit model is trained in two stages. In the first stage, a conventional, general model is trained on a diverse data set (n = 80 patients) consisting of CT images and clinical contours. In the second stage, the same model is fine-tuned on a data set consisting solely of deformations derived from data of the single patient of interest. The conventional model and the proposed overfit model were evaluated using the Dice similarity coefficient computed for 2 structures in 20 test patients.
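The two-stage procedure can be illustrated with a minimal toy sketch. This is not the authors' implementation: the linear "model", the data, the learning rates, and the 2% perturbation standing in for anatomic deformations are all illustrative assumptions; the point is only the structure of stage 1 (pretrain on a diverse cohort) followed by stage 2 (deliberately overfit on augmented copies of one patient's data).

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(w, X, y, lr, epochs):
    """Plain gradient descent on mean-squared error (toy stand-in for
    training a segmentation network)."""
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Stage 1: pretrain on a diverse "cohort" of n = 80 samples (synthetic data).
w_true = rng.normal(size=5)
X_cohort = rng.normal(size=(80, 5))
y_cohort = X_cohort @ w_true + 0.1 * rng.normal(size=80)
w_general = sgd(np.zeros(5), X_cohort, y_cohort, lr=0.05, epochs=200)

# Stage 2: fine-tune the SAME weights on many small "deformations" of a
# single patient's sample, intentionally overfitting to that patient.
x_patient = rng.normal(size=5)
y_patient = float(x_patient @ w_true) + 2.0      # patient deviates from cohort
X_aug = x_patient + 0.01 * rng.normal(size=(50, 5))
y_aug = np.full(50, y_patient)
w_overfit = sgd(w_general.copy(), X_aug, y_aug, lr=0.02, epochs=500)

# The overfit model fits this patient far better than the general model.
err_general = abs(float(x_patient @ w_general) - y_patient)
err_overfit = abs(float(x_patient @ w_overfit) - y_patient)
```

In a real DLS setting, stage 2 would start from the pretrained network weights and train on deformably augmented images and contours of the patient of interest; overfitting, normally avoided, is desirable here because the model will only ever be applied to that patient.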
Results: Adopting the IDOL framework improved segmentation performance for each patient: the mean Dice score improved from 0.85 with the conventional model to 0.92 with the patient-specific overfit model, and by up to 14% in the best case.
Conclusion: Applied to the auto-segmentation task, the IDOL framework outperforms the conventional DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.