Session: Multi-Disciplinary General ePoster Viewing

A 3D Multi-Modality Lung Tumor Segmentation Method Based On Deep Learning

S Wang1*, L Yuan2, E Weiss3, R Mahon4, (1) Virginia Commonwealth University, Richmond, VA, (2) Virginia Commonwealth University Medical Center, Richmond, VA, (3) Virginia Commonwealth University, Richmond, VA, (4) Washington University in St. Louis, St. Louis, MO

Presentations

PO-GePV-M-8 (Sunday, 7/25/2021) [Eastern Time (GMT-4)]

Purpose: To develop an automatic lung tumor segmentation method based on multi-modality images and deep learning.

Methods: A 3D segmentation neural network using multi-modality PET and CT image inputs was constructed based on the U-Net. The network architecture consists of two parallel convolution arms that independently extract features from the simulation CT and the diagnostic PET at multiple resolution levels. The extracted features are concatenated and fed into a single deconvolution path. The resulting tumor probability map was post-processed with a conditional random field and thresholded to obtain the tumor segmentation. The performance of this 3D multi-modality network was compared with that of a previously constructed 2D multi-modality network and with 2D and 3D CT-only single-modality networks. Furthermore, a volume-stratified training/validation strategy was evaluated for its effect on segmentation performance. The networks were trained/validated/tested (3:1:1 split ratio) on 290 pairs of diagnostic PETs and planning CTs (166 SBRT and 124 conventionally fractionated) from lung cancer patients treated at our clinic, with manual physician contours as the ground truth. Segmentation performance was evaluated via the Dice similarity coefficient (DSC) and the Hausdorff distance (HD).
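Below is a minimal PyTorch sketch of the dual-arm design described above: two parallel 3D encoders, one per modality, whose features are concatenated at each resolution level and decoded through a single deconvolution path. The depth, channel widths, and class names are illustrative assumptions rather than the authors' exact configuration, and the conditional-random-field post-processing is omitted.

```python
# Illustrative sketch only; layer widths, depth, and fusion details are
# assumptions, not the authors' reported configuration.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3D convolutions with BatchNorm and ReLU, as in a standard U-Net."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualArmUNet3D(nn.Module):
    """Two parallel encoder arms (CT and PET); features are concatenated at
    each resolution level and decoded through a single deconvolution path."""

    def __init__(self, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4]
        # Independent encoder arms, one per modality.
        self.ct_enc = nn.ModuleList([conv_block(1, chs[0]),
                                     conv_block(chs[0], chs[1]),
                                     conv_block(chs[1], chs[2])])
        self.pet_enc = nn.ModuleList([conv_block(1, chs[0]),
                                      conv_block(chs[0], chs[1]),
                                      conv_block(chs[1], chs[2])])
        self.pool = nn.MaxPool3d(2)
        # Single decoder; channel counts double because CT and PET features
        # are concatenated at every level (skip connections included).
        self.up2 = nn.ConvTranspose3d(2 * chs[2], chs[1], 2, stride=2)
        self.dec2 = conv_block(chs[1] + 2 * chs[1], chs[1])
        self.up1 = nn.ConvTranspose3d(chs[1], chs[0], 2, stride=2)
        self.dec1 = conv_block(chs[0] + 2 * chs[0], chs[0])
        self.head = nn.Conv3d(chs[0], 1, kernel_size=1)

    def forward(self, ct, pet):
        # Encode each modality independently at three resolution levels.
        c1, p1 = self.ct_enc[0](ct), self.pet_enc[0](pet)
        c2, p2 = self.ct_enc[1](self.pool(c1)), self.pet_enc[1](self.pool(p1))
        c3, p3 = self.ct_enc[2](self.pool(c2)), self.pet_enc[2](self.pool(p2))
        # Fuse by concatenation, then decode through one path.
        x = self.up2(torch.cat([c3, p3], dim=1))
        x = self.dec2(torch.cat([x, c2, p2], dim=1))
        x = self.up1(x)
        x = self.dec1(torch.cat([x, c1, p1], dim=1))
        return torch.sigmoid(self.head(x))  # voxel-wise tumor probability


# Smoke test on a small patch; real inputs would be co-registered PET/CT.
if __name__ == "__main__":
    ct = torch.randn(1, 1, 32, 64, 64)
    pet = torch.randn(1, 1, 32, 64, 64)
    print(DualArmUNet3D()(ct, pet).shape)  # torch.Size([1, 1, 32, 64, 64])
```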

Results: The mean DSC and HD of the single-modality models were 0.75±0.16 and 8.5±5.8 mm for 2D, and 0.77±0.12 and 7.6±4.7 mm for 3D, respectively. The mean DSC and HD of the multi-modality networks were 0.78±0.15 and 7.6±4.7 mm for 2D, and 0.79±0.10 and 5.8±3.2 mm for 3D, respectively. Small GTVs benefited from separate training/validation, with a threshold of 25 mL identified as ideal. The best overall performance was obtained by the multi-modality volume-stratified 3D model, with an overall DSC and HD of 0.83±0.09 and 5.9±2.5 mm, respectively.
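As a rough illustration of the volume-stratified strategy, the sketch below partitions cases at the 25 mL GTV cutoff identified above so that a separate model can be trained and validated on each stratum; the case-dictionary layout and the `gtv_volume_ml` field name are hypothetical.

```python
# Hypothetical sketch of volume-stratified training/validation: cases are
# split at the 25 mL GTV cutoff reported above and a separate network is
# fit to each stratum. The "gtv_volume_ml" field name is an assumption.
def stratify_by_gtv_volume(cases, threshold_ml=25.0):
    """Partition cases into small- and large-GTV subsets by volume (mL)."""
    small = [c for c in cases if c["gtv_volume_ml"] < threshold_ml]
    large = [c for c in cases if c["gtv_volume_ml"] >= threshold_ml]
    return small, large


# Usage: one model per stratum; predictions are pooled for overall DSC/HD.
# small_cases, large_cases = stratify_by_gtv_volume(all_cases)
```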

Conclusion: The 3D multi-modality deep learning network outperformed its 2D counterpart and the single-modality networks. Volume-based stratification further improved the segmentation results.

Funding Support, Disclosures, and Conflict of Interest: Conflict of Interest: None. Disclosures: Dr. Elisabeth Weiss: NIH research grant; UpToDate royalties; ViewRay research funding.

    Keywords

    Segmentation, Lung, PET

    Taxonomy

    IM/TH- Image Segmentation: Multi-modality segmentation
