
Session: AI in Imaging

An Effective Deep Learning Framework for Lung Tumor Segmentation in 4D-CT

S Momin*, Y Lei, Z Tian, T Wang, J Roper, A Kesarwala, K Higgins, J Bradley, T Liu, X Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory Univ, Atlanta, GA

Presentations

TH-D-TRACK 3-1 (Thursday, 7/29/2021) 2:00 PM - 3:00 PM [Eastern Time (GMT-4)]

Purpose: To present a deep-learning-based framework for tumor auto-segmentation in 4D lung CT images.

Methods: We proposed a motion convolutional neural network (CNN) to perform lung tumor segmentation in 4D-CT images. The network first extracts tumor motion information from consecutive 4D-CT phases as input to an integrated backbone network. The extracted motion information is then passed to global and local motion head networks to estimate the corresponding deformation vector fields. A self-attention strategy was incorporated into the final mask head network to suppress noisy features that could degrade segmentation performance. The mask head was supervised by a segmentation loss combining Dice loss and binary cross-entropy loss. A 4D-CT dataset with 200 images from 20 lung cancer patients was used to evaluate our method in cross-validation and hold-out experiments. The accuracy of the auto-segmented contours was benchmarked against expert contours drawn by physicians using center-of-mass distance (CMD), Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), and mean surface distance (MSD). The performance of our approach was statistically compared against the U-Net architecture via a paired t-test.
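
For illustration, the sketch below shows a minimal PyTorch-style implementation of a segmentation loss that combines Dice loss and binary cross-entropy, as used to supervise the mask head. The equal weighting, tensor shapes, and other implementation details are assumptions for this sketch, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def segmentation_loss(logits, target, dice_weight=0.5, eps=1e-6):
    """Combined Dice + binary cross-entropy loss (illustrative sketch).

    logits: raw mask-head output, shape (B, 1, D, H, W)
    target: binary ground-truth mask of the same shape (float tensor)
    dice_weight: relative weight of the Dice term; the 0.5 default is an
                 assumption, not the weighting reported in the abstract.
    """
    prob = torch.sigmoid(logits)
    # Soft Dice loss computed over each volume in the batch
    intersection = (prob * target).sum(dim=(1, 2, 3, 4))
    union = prob.sum(dim=(1, 2, 3, 4)) + target.sum(dim=(1, 2, 3, 4))
    dice_loss = 1.0 - (2.0 * intersection + eps) / (union + eps)
    # Binary cross-entropy on raw logits (numerically stable form)
    bce_loss = F.binary_cross_entropy_with_logits(logits, target)
    return dice_weight * dice_loss.mean() + (1.0 - dice_weight) * bce_loss
```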

Results: For all patients, our framework showed good overall agreement with the physician-drawn contours, yielding a CMD, DSC, HD95, and MSD of 0.71±0.36 mm, 0.91±0.07, 1.82±0.47 mm, and 0.45±0.16 mm, respectively. Our method significantly outperformed the U-Net architecture in segmentation accuracy in both the cross-validation (p < 0.05) and hold-out (p < 0.05) experiments.
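
As a hedged illustration of the evaluation described above, the sketch below shows how the reported metrics (DSC, HD95, MSD, CMD) and the paired t-test against U-Net could be computed with NumPy/SciPy. The function names, data layout, and voxel-spacing handling are hypothetical and not drawn from the authors' implementation.

```python
import numpy as np
from scipy.stats import ttest_rel
from scipy.spatial.distance import cdist

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95_and_msd(pred_pts, gt_pts):
    """95th-percentile Hausdorff distance and mean surface distance (mm),
    given surface point arrays of shape (N, 3) in millimeters."""
    d = cdist(pred_pts, gt_pts)
    dists = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return np.percentile(dists, 95), dists.mean()

def center_of_mass_distance(pred, gt, voxel_size_mm):
    """Center-of-mass distance (CMD) between two binary masks, in mm."""
    com_pred = np.argwhere(pred).mean(axis=0) * voxel_size_mm
    com_gt = np.argwhere(gt).mean(axis=0) * voxel_size_mm
    return np.linalg.norm(com_pred - com_gt)

# Paired t-test between per-patient metric values of the two models, e.g.:
#     t_stat, p_value = ttest_rel(dsc_proposed, dsc_unet)
# where dsc_proposed and dsc_unet each hold one DSC value per patient.
```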

Conclusion: We developed a novel motion-CNN-based approach for fully automated 4D tumor delineation in lung cancer patients and demonstrated its feasibility and reliability. The promising results provide impetus for potential applications both in current clinical practice and in real-time, image-driven automatic contouring to advance adaptive lung radiotherapy.
