Session: Imaging: Nuclear Medicine, PET

Learning Feature Fusion for Multi-Modality PET-CT Segmentation with An Interpretation Method

Z Chen1, W Lu2, X Qi3, S Tan1*, (1) Huazhong University of Science and Technology, Wuhan, China (2) Memorial Sloan Kettering Cancer Center, Scarsdale, NY, USA (3) UCLA School of Medicine, Los Angeles, CA, USA

Presentations

WE-IePD-TRACK 2-2 (Wednesday, 7/28/2021) 12:30 PM - 1:00 PM [Eastern Time (GMT-4)]

Purpose: The performance of a multi-modality PET-CT segmentation method relies heavily on the quality of the information fusion strategy used. We utilize interpretation methods as tools to quantitatively measure the importance of information from each modality. We propose a novel feature fusion strategy based on the measured importance of information and design an iterative algorithm to automatically learn how to fuse information from different modalities.

Methods: The co-segmentation framework consists of three stages. The first stage contains two parallel branches, each responsible for high-dimensional feature extraction from either PET or CT using an encoder-decoder style network. The second stage consists of an interpretable module, in which a perturbation-based interpretation method is used to measure the importance of different features. From the measured feature importance, we obtain an effective weighted feature fusion strategy. The third stage is composed of cascaded convolutional blocks, which follow the interpretable module and are used to re-extract semantic features and output the final segmentation mask. The algorithm is trained and tested on a PET-CT dataset of 84 patients, of which 60 are randomly selected for training and the remaining 24 are used for testing.
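The sketch below illustrates one possible form of the perturbation-based importance weighting and fusion step described above. It is a minimal PyTorch sketch under assumed choices, not the authors' implementation: the segmentation head, soft Dice scoring, drop probability, number of perturbation trials, and all function and variable names are hypothetical.

import torch

def soft_dice(pred, target, eps=1e-6):
    # Soft Dice overlap between a predicted probability map and a reference mask.
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def perturbation_importance(features, seg_head, ref_mask, n_trials=8, drop_p=0.5):
    # Estimate per-channel importance by randomly dropping channels and
    # measuring how much the segmentation score degrades (assumed scheme).
    c = features.shape[1]
    scores, counts = torch.zeros(c), torch.zeros(c)
    with torch.no_grad():
        base = soft_dice(seg_head(features), ref_mask)
        for _ in range(n_trials):
            keep = (torch.rand(c) > drop_p).float().view(1, c, 1, 1)
            drop_in_dice = base - soft_dice(seg_head(features * keep), ref_mask)
            dropped = 1.0 - keep.view(c)
            scores += drop_in_dice * dropped   # credit the dropped channels
            counts += dropped
    return scores / counts.clamp(min=1.0)

def fuse(pet_feat, ct_feat, pet_imp, ct_imp):
    # Weight each modality's channels by normalized importance, then concatenate.
    w = torch.softmax(torch.cat([pet_imp, ct_imp]), dim=0)
    w_pet = w[:pet_imp.numel()].view(1, -1, 1, 1)
    w_ct = w[pet_imp.numel():].view(1, -1, 1, 1)
    return torch.cat([pet_feat * w_pet, ct_feat * w_ct], dim=1)

In this sketch the fused tensor would then be passed to the cascaded convolutional blocks of the third stage; how the importance estimates are refreshed during training (e.g., per epoch or per iteration) is left unspecified here.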

Results: We use W-Net, a classic co-segmentation method, as the baseline. Compared to the baseline metrics (DSC=0.82, VE=0.36, CE=0.17), our method (DSC=0.85, VE=0.30, CE=0.15) achieves superior performance. In addition, we decompose our framework to investigate the contribution of the learned feature fusion strategy in the interpretable module: the backbone with the interpretable module performs much better than the backbone without it.
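For reference, a small sketch of the reported metrics on binary masks is shown below. The abstract does not define VE and CE, so the common definitions of relative volume error and voxel classification error are assumed here purely for illustration.

import numpy as np

def dsc(pred, gt, eps=1e-8):
    # Dice similarity coefficient between two binary masks.
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

def volume_error(pred, gt, eps=1e-8):
    # Relative volume error |V_pred - V_gt| / V_gt (assumed definition).
    return abs(float(pred.sum()) - float(gt.sum())) / (gt.sum() + eps)

def classification_error(pred, gt, eps=1e-8):
    # Misclassified voxels (FP + FN) relative to the ground-truth volume (assumed definition).
    fp = np.logical_and(pred, np.logical_not(gt)).sum()
    fn = np.logical_and(np.logical_not(pred), gt).sum()
    return (fp + fn) / (gt.sum() + eps)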

Conclusion: We propose a co-segmentation framework for PET-CT segmentation and introduce a perturbation-based interpretation method to learn how to effectively integrate features from multiple modalities. The proposed network with the learned feature fusion strategy achieved superior performance compared to the baseline.

Funding Support, Disclosures, and Conflict of Interest: This work was supported in part by the National Natural Science Foundation of China (NNSFC) under Grant Nos. 62071197 and 61672253.
