
Intentional Deep Overfit Learning (IDOL): A Novel Deep Learning Strategy for Adaptive Radiation Therapy

J Chun1*, J Park2*, S Olberg2,3, Y Zhang2, D Nguyen2, J Wang2, J Kim1**, S Jiang2, (1) Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, KR, (2) Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, (3) Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO

Presentations

MO-EF-TRACK 4-5 (Monday, 7/26/2021) 3:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Purpose: Deep learning (DL) holds promise for realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, unsolved challenges remain in achieving satisfactory generalizability for a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations. In this study, we propose a DL framework tailored for patient-specific performance that leverages the behavior of a model intentionally overfit to a patient's available prior information – an approach we term Intentional Deep Overfit Learning (IDOL).

Methods: Applying the IDOL framework to any radiotherapy task consists of two training stages: 1) training a generalized model on a diverse set of n patients, as in the conventional DL approach, and 2) intentionally overfitting this general model to the patient of interest (patient n+1) using perturbations and augmentations of the available task- and patient-specific prior information, yielding a personalized IDOL model. The IDOL framework itself is task-agnostic and is thus widely applicable to many components of the ART workflow, three of which we explore here.
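The two-stage procedure above can be sketched in miniature. The following is a minimal toy illustration, not the authors' implementation: it uses a linear model with gradient descent in place of a deep network, synthetic data in place of imaging data, and small random input perturbations in place of task-specific augmentations. All variable names (`w_general`, `w_idol`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w, lr=0.05, epochs=300):
    """Gradient-descent least-squares fit of a linear model y ~ X @ w."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

# Stage 1: train a "general" model on a diverse population of n patients
# (synthetic stand-in for the conventional DL training set).
X_pop = rng.normal(size=(200, 5))
w_true_pop = rng.normal(size=5)
y_pop = X_pop @ w_true_pop + rng.normal(scale=0.5, size=200)
w_general = train(X_pop, y_pop, np.zeros(5))

# Patient n+1 follows a slightly different mapping (patient-specific variation).
w_true_pt = w_true_pop + rng.normal(scale=0.8, size=5)
X_pt = rng.normal(size=(1, 5))   # the single available prior sample
y_pt = X_pt @ w_true_pt

# Stage 2 (IDOL): intentionally overfit the general model to perturbed and
# augmented copies of that one patient-specific sample.
X_aug = X_pt + rng.normal(scale=0.01, size=(50, 5))
y_aug = np.repeat(y_pt, 50)
w_idol = train(X_aug, y_aug, w_general.copy(), epochs=500)

# Compare errors on the patient's own data: the overfit model should be
# far more accurate for this one patient, at the cost of generality.
err_general = float(abs((X_pt @ w_general - y_pt).item()))
err_idol = float(abs((X_pt @ w_idol - y_pt).item()))
```

The design point the sketch makes concrete is that stage 2 starts from the stage-1 weights rather than from scratch, so the personalized model inherits the population-level prior and only needs the augmented patient data to specialize.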

Results: In the replanning CT auto-contouring task, the Dice similarity coefficient for the parotid glands improves from 0.865 with the general model to 0.939 with the IDOL model. In the MRI super-resolution task, the mean absolute error (MAE) improves by 28% with the IDOL framework over the conventional model. Finally, in the synthetic CT reconstruction task, the IDOL framework reduces the MAE from 68 to 22 HU.

Conclusion: In this study, we demonstrate the wide applicability of the IDOL framework to common ART tasks. We expect the IDOL framework to be especially useful in creating personally tailored models in situations with limited availability of training data.
