Session: Imaging General ePoster Viewing

Standardizing Radiotherapy Structure Names with Multimodal Data: Deep Learning Approach

P Bose1*, S Srinivasan2, P Turner3, W Sleeman4, J Palta5, R Kapoor6, P Ghosh7, (1) Virginia Commonwealth University, Richmond, VA, (2) Virginia Commonwealth University, (3) , (4) Virginia Commonwealth University, Richmond, VA, (5) Virginia Commonwealth University, Richmond, VA, (6) VCU Health System, Richmond, VA, (7) Virginia Commonwealth University, Richmond, VA

Presentations

PO-GePV-I-65 (Sunday, 7/10/2022)   [Eastern Time (GMT-4)]

ePoster Forums

Purpose: With the advent of big data research supported by AI models, there is a growing need for consistent, standard nomenclature for targets and organs-at-risk (OARs). The AAPM TG-263 report provides this standardized nomenclature for structure names. We leverage the non-standard names, 3D imaging, and dose information from retrospective DICOM-RT datasets in a CNN-based deep learning model that automatically renames structures to their standard names.
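To make the task concrete, the mapping below sketches how free-text structure names might be normalized to standardized labels. The specific aliases and target names are illustrative assumptions, not drawn from the study's data; consult the TG-263 report for the exact standard nomenclature.

```python
# Illustrative lookup from free-text structure names to standardized
# labels (hypothetical aliases; see AAPM TG-263 for the exact names).
NONSTANDARD_TO_STANDARD = {
    "bladder_full": "Bladder",
    "BLADDER": "Bladder",
    "rect": "Rectum",
    "lt fem head": "Femur_Head_L",
    "ptv 70gy": "PTV",
}

def standardize(name: str) -> str:
    """Return the standardized label, or 'Other' when the name is unknown."""
    return NONSTANDARD_TO_STANDARD.get(name, "Other")
```

A pure lookup table like this fails on unseen variants, which is precisely why the abstract's learned, multimodal classifier is needed.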

Methods: We used 9,750 structures from 550 prostate patients treated at 41 VA sites to automatically classify the following structures: Bladder, Rectum, Left Femur, Right Femur, Small Bowel, Large Bowel, PTV, and all other structures. The 3D bitmap structures, imaging, and dose data were extracted by centering a bounding box on each structure. The non-standard names were tokenized with BioBERT, and the embedding vectors were computed from our corpus. We built four 3D-CNN models with data inputs from structure only, structure plus image, structure plus dose, and structure plus image plus dose. An additional 1D-CNN model was built to learn textual features, and its output was concatenated with the output feature layer of each 3D-CNN model. Finally, the model was completed with a dense layer to make the predictions.
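The fusion described above can be sketched as a two-branch network: a 3D-CNN over the volumetric inputs and a 1D-CNN over the BioBERT token embeddings, concatenated before a dense classification head. All shapes, channel counts, and layer sizes below are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultimodalNet(nn.Module):
    """Sketch of one multimodal variant (structure + image + dose + text)."""

    def __init__(self, num_classes: int = 8, emb_dim: int = 768):
        super().__init__()
        # 3D branch over stacked structure/image/dose channels.
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # 1D branch over BioBERT token embeddings of the free-text name.
        self.cnn1d = nn.Sequential(
            nn.Conv1d(emb_dim, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
        )
        # Dense head over the concatenated feature vectors.
        self.head = nn.Linear(32 + 64, num_classes)

    def forward(self, vol: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # vol: (B, 3, D, H, W); tokens: (B, emb_dim, seq_len)
        feats = torch.cat([self.cnn3d(vol), self.cnn1d(tokens)], dim=1)
        return self.head(feats)
```

Dropping the `cnn1d` branch (and shrinking the head accordingly) yields the structure-only variants; the abstract's four 3D-CNN models differ only in which volumetric channels are stacked into `vol`.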

Results: The macro-averaged F1-scores for the four 3D-CNN models (structure only, structure plus image, structure plus dose, and structure plus image plus dose) were 0.779, 0.745, 0.748, and 0.734, respectively. With the 1D-CNN model on the non-standard names added, overall performance improved, with F1-scores of 0.934, 0.951, 0.924, and 0.949, respectively.
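The macro-averaged F1 reported above is the unweighted mean of per-class F1-scores, so each of the eight classes counts equally regardless of its frequency. A minimal sketch of the metric (equivalent to scikit-learn's `f1_score` with `average="macro"`):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over all observed classes."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but true class was t
            fn[t] += 1  # missed an instance of class t
    classes = set(y_true) | set(y_pred)
    scores = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)
```

Macro-averaging is a sensible choice here because the "all other structures" class is likely far larger than any single OAR class, and a frequency-weighted average would mask poor performance on the rare classes.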

Conclusion: CNN models can learn radiomic features from 3D bitmap structures; however, in prior work the addition of dose and imaging data did not improve model performance. Because large annotated datasets for effectively training such 3D-CNN models are unavailable, we augmented model training with additional textual features to improve performance.

Funding Support, Disclosures, and Conflict of Interest: The National Radiation Oncology Program, Veterans Affairs (NROP-VHA) funded this work.

Keywords

Image Fusion, Image Analysis, Image-guided Therapy

Taxonomy

IM- Multi-Modality Imaging Systems: Other
