
Session: Multi-Disciplinary General ePoster Viewing

Feasibility of Federated Learning for Head and Neck (HN) Tumor Segmentation in PET/CT

Y Yuan*, S Adabi, Y Lo, J Junn, R Bakst, The Mount Sinai Hospital, New York, NY


PO-GePV-M-231 (Sunday, 7/25/2021)   [Eastern Time (GMT-4)]

Purpose: The generalization of deep-learning-based medical image segmentation depends strongly on the number and diversity of training cases; however, aggregating data from different institutions is practically challenging due to technical, legal, privacy, and data-ownership barriers. This study investigates the feasibility of using federated learning to collaboratively train an auto-segmentation model for HN tumor segmentation in PET/CT from multi-institutional cases without data sharing.

Methods: We used 200 training cases from the HECKTOR challenge at MICCAI 2020. These cases were collected from four institutions: 55 from CHGJ, 18 from CHMR, 55 from CHUM, and 72 from CHUS. We randomly selected 20% of the cases from each institution to form a testing dataset of 40 cases; the rest were used for training. Segmentation was based on the scale-attention network (SA-Net) that we recently developed, which achieved 4th place in the HECKTOR challenge. We implemented FedAvg, in which the weights from each local model are aggregated on the server to form a global model that is then redistributed to each client for further training. The performance of the FedAvg model was compared with baseline models trained exclusively on the local data within each institution.
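The FedAvg aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-layer weight averaging with clients weighted by their local case counts follows the standard FedAvg algorithm, and the function and variable names here are hypothetical.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Server-side FedAvg step: average each layer's weights across
    clients, weighting each client by its number of local training cases.

    client_weights : list over clients, each a list of per-layer np.ndarrays
    client_sizes   : list of local training-set sizes (e.g. cases per institution)
    """
    total = sum(client_sizes)
    fractions = [n / total for n in client_sizes]
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters across all clients.
        agg = sum(f * w[layer] for f, w in zip(fractions, client_weights))
        global_weights.append(agg)
    return global_weights  # redistributed to each client for the next round
```

With the case counts from this study (55, 18, 55, 72), the larger institutions dominate the average, which is one reason the abstract highlights the imbalance in cases per institution.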

Results: The average DSC on the testing dataset was 0.716, 0.434, 0.680, and 0.738 when applying the baseline model from CHGJ, CHMR, CHUM, and CHUS, respectively. The FedAvg model yielded an average DSC of 0.752, achieving 97% of the performance of the pooled model (DSC: 0.775) trained by pooling all data across institutions.
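The Dice similarity coefficient (DSC) used above is a standard overlap metric between a predicted and a reference segmentation mask. A minimal sketch, assuming binary numpy masks (the study's actual evaluation pipeline is not specified in the abstract):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """DSC = 2 * |pred ∩ gt| / (|pred| + |gt|) for binary masks.
    Returns 1.0 for perfect overlap, 0.0 for no overlap."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom
```

Under this definition, the reported ratio 0.752 / 0.775 ≈ 0.97 is how the "97% of the pooled model's performance" figure follows from the stated DSC values.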

Conclusion: Our experiments show that the segmentation performance of the baseline models was restricted by the limited local data. Despite the significant imbalance in the number of cases per institution, federated learning has the potential to train a better model across collaborating institutions without sharing their data.

Funding Support, Disclosures, and Conflict of Interest: This work is partially supported by a research grant from Varian Medical Systems (Palo Alto, CA, USA) and grant UL1TR001433 from the National Center for Advancing Translational Sciences, National Institutes of Health, USA.


