Session: Novel Strategies Using Existing Imaging Technology for Planning, Delivery and Toxicity Analyses

Predicting Locoregional Recurrence Through Multi-Modality and Multi-View Deep Learning in Head & Neck Cancer

J Guo1*, R Wang2, Z Zhou3, K Wang4, R Xu5, J Wang6, (1) Xidian University, CN, (2) Xidian University, CN, (3) University of Central Missouri, Warrensburg, MO, (4) UT Southwestern Medical Center, Dallas, TX, (5) Putian University, CN, (6) UT Southwestern Medical Center, Dallas, TX

Presentations

TH-F-TRACK 4-7 (Thursday, 7/29/2021) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Purpose: Locoregional recurrence (LRR) remains one of the leading causes of treatment failure in head and neck (H&N) cancer despite advances in multidisciplinary management. Accurately predicting LRR at an early stage enables optimal personalized treatment planning. We aim to develop a multi-modality and multi-view convolutional neural network (mMmV-CNN) for LRR prediction.

Methods: A total of 206 H&N cancer patients who received radiation treatment from September 2005 to November 2015 were used in this study, including both imaging and clinical parameters. In mMmV-CNN, a dimension-reduction operator rotates the 3D CT and PET data about the vertical axis and computes the average projection along the horizontal axis, yielding 2D images from different directions that make full use of the information in the 3D data. A multi-view strategy then improves the 2D CNN's feature extraction by aggregating the features extracted from the different views. As a result, the number of network parameters is greatly reduced while the 3D spatial context is still used effectively. Furthermore, we design a multi-modality deep neural network that can be trained in an end-to-end manner, jointly optimizing the deep features of CT, PET and clinical parameters.
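
Below is a minimal sketch of the two components described above: the view-generating projection operator and an end-to-end multi-modality network with multi-view feature aggregation. It assumes CT/PET volumes are numpy arrays in (z, y, x) order and that clinical parameters form a fixed-length vector; all names (multi_view_projections, MultiModalityNet), layer sizes, and the number of views are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.ndimage import rotate

    def multi_view_projections(volume, n_views=4):
        """Rotate a 3D volume about the vertical (z) axis and average-project
        along the horizontal (x) axis, yielding one 2D view per angle."""
        views = []
        for k in range(n_views):
            angle = 180.0 * k / n_views           # evenly spaced view angles
            r = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
            views.append(r.mean(axis=2))          # average projection -> 2D image
        return np.stack(views)                    # (n_views, H, W)

    class MultiModalityNet(nn.Module):
        """Shared 2D CNN per modality; per-view features are averaged
        (multi-view aggregation), then CT, PET and clinical features are
        fused and the whole network is trained end-to-end."""
        def __init__(self, n_clinical, feat_dim=64):
            super().__init__()
            def backbone():
                return nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(32, feat_dim), nn.ReLU(),
                )
            self.ct_net, self.pet_net = backbone(), backbone()
            self.head = nn.Sequential(
                nn.Linear(2 * feat_dim + n_clinical, 32), nn.ReLU(),
                nn.Linear(32, 1),                 # LRR logit
            )

        def _encode(self, net, views):            # views: (B, V, H, W)
            b, v, h, w = views.shape
            f = net(views.reshape(b * v, 1, h, w))    # per-view features
            return f.reshape(b, v, -1).mean(dim=1)    # aggregate across views

        def forward(self, ct_views, pet_views, clinical):
            fused = torch.cat([self._encode(self.ct_net, ct_views),
                               self._encode(self.pet_net, pet_views),
                               clinical], dim=1)
            return self.head(fused)

    # Example forward pass on synthetic data
    ct = torch.rand(2, 4, 64, 64)    # batch of 2 patients, 4 CT views each
    pet = torch.rand(2, 4, 64, 64)
    clin = torch.rand(2, 8)          # 8 clinical parameters (hypothetical count)
    logit = MultiModalityNet(n_clinical=8)(ct, pet, clin)
    print(logit.shape)               # torch.Size([2, 1])

Because the per-view backbone weights are shared, adding views increases compute but not parameter count, which is how the 2D multi-view design stays small relative to a full 3D CNN.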

Results: Five-fold cross-validation was performed in this study. Combining CT, PET and clinical data, the model achieved an area under the curve (AUC) of 0.8052, sensitivity of 0.7347, specificity of 0.8089, and accuracy of 0.7913.
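
A minimal sketch of the five-fold evaluation protocol, assuming binary LRR labels and a model that outputs recurrence probabilities; the helper names (evaluate_five_fold, train_and_predict) and the 0.5 threshold are hypothetical, not the authors' code.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score

    def evaluate_five_fold(X, y, train_and_predict, threshold=0.5, seed=0):
        """Stratified 5-fold CV; train_and_predict(X_tr, y_tr, X_te) is any
        callable returning recurrence probabilities for the held-out fold."""
        skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        aucs, sens, spec, accs = [], [], [], []
        for tr, te in skf.split(X, y):
            p = train_and_predict(X[tr], y[tr], X[te])
            yhat = (p >= threshold).astype(int)
            tn, fp, fn, tp = confusion_matrix(y[te], yhat, labels=[0, 1]).ravel()
            aucs.append(roc_auc_score(y[te], p))
            sens.append(tp / (tp + fn))   # sensitivity (true positive rate)
            spec.append(tn / (tn + fp))   # specificity (true negative rate)
            accs.append(accuracy_score(y[te], yhat))
        return tuple(float(np.mean(m)) for m in (aucs, sens, spec, accs))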

Conclusion: In this study, a new mMmV-CNN model was developed to accurately identify H&N cancer patients at high risk of LRR after definitive radiation or chemoradiation therapy. Compared with a single-view method, the multi-view method fully utilizes image data from different angles to obtain more discriminative information. Furthermore, the experimental results demonstrated that mMmV-CNN achieves better performance by combining CT, PET and clinical features.

    Keywords

    Not Applicable / None Entered.

    Taxonomy

    Not Applicable / None Entered.
