ePoster Forums
Purpose: AAPM TG-263 provides standardized nomenclature guidelines for targets, normal tissue structures, and treatment planning concepts and metrics. These recommendations can be applied to prospective treatment planning (DICOM-RT) datasets; one challenge is relabeling all of the retrospective datasets in our treatment planning systems to this standardized nomenclature. Here, we leverage a Federated Learning (FL) model in which the data remain distributed across the centers and each center trains the model locally. Compared with traditional centralized machine learning (ML) techniques, which require all data to be stored on a single server, FL reduces privacy concerns by keeping the raw data at the local healthcare facility.
Methods: A dataset of 794 prostate cancer patients from 40 VA centers was used. Nine routinely contoured prostate structures were manually relabeled with TG-263 standard names. We used the random forest algorithm, with the data horizontally partitioned across the 40 centers. Model aggregation used component-wise parameter averaging, chosen because it guarantees convergence; the averaging weights each center by the proportion of data points it contributes.
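The abstract does not specify the aggregation code, so the following is only a minimal, hypothetical sketch of data-size-weighted, component-wise parameter averaging (FedAvg-style) over generic numeric parameter vectors; the names federated_average, center_params, and center_sizes are illustrative and not part of the study's implementation.

    import numpy as np

    def federated_average(center_params, center_sizes):
        """Component-wise average of per-center parameter vectors,
        weighted by each center's share of the total training data."""
        params = np.stack([np.asarray(p, dtype=float) for p in center_params])  # (n_centers, n_params)
        sizes = np.asarray(center_sizes, dtype=float)
        weights = sizes / sizes.sum()  # proportion of data points contributed by each center
        return np.average(params, axis=0, weights=weights)

    # Example: three hypothetical centers with different data volumes
    params = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
    sizes = [120, 40, 20]
    print(federated_average(params, sizes))  # weights are 120/180, 40/180, 20/180

The weighting mirrors the description above: centers that contribute more labeled structures pull the averaged parameters proportionally more toward their local model.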
Results: The best macro-averaged F1-score, precision, and recall were 0.870, 0.930, and 0.901, respectively. We also compared this model with a traditional centralized ML implementation of the same task and observed that the FL model's accuracy dropped by only 7.3%.
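For reference, macro averaging computes each metric per structure label and then takes the unweighted mean across labels, so infrequently contoured structures count as much as common ones. A minimal scikit-learn sketch follows; the example labels are hypothetical and not taken from the study data.

    from sklearn.metrics import f1_score, precision_score, recall_score

    y_true = ["CTV", "PTV", "Bladder", "Rectum", "CTV", "Bladder"]
    y_pred = ["CTV", "PTV", "Rectum", "Rectum", "CTV", "Bladder"]

    # average="macro": compute the metric for each class, then take the unweighted mean
    print(f1_score(y_true, y_pred, average="macro", zero_division=0))
    print(precision_score(y_true, y_pred, average="macro", zero_division=0))
    print(recall_score(y_true, y_pred, average="macro", zero_division=0))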
Conclusion: Medical information sharing raises significant ethical, privacy, and information security issues, thereby limiting centralized Artificial Intelligence (AI)-based technologies. Our proposed FL model enables cross-organization learning of structure names without direct access to the data and has the potential to become the next-generation AI model training framework, offering privacy protection. Future research needs to identify the right mix of FL models (with lower accuracy and higher security) and traditional ML models (vice versa) depending on the clinical context and data availability.