Session: Deep Learning for Image Quality in Treatment Planning

Deep Learning-Based Contrast Enhanced Dual-Energy CT Imaging From Non-Enhanced Single-Energy CT

H Xie1, Y Lei1, T Wang1, J Roper1, B B Ghavidel1, M McDonald1, D S Yu1, X Tang2, J Bradley1, T Liu1, X Yang1*, (1) Department of Radiation Oncology and Winship Cancer Institute, (2) Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University School of Medicine, Atlanta, GA

Presentations

SU-H330-IePD-F5-5 (Sunday, 7/10/2022) 3:30 PM - 4:00 PM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 5

Purpose: Contrast-enhanced dual-energy CT (CEDECT) is often ordered by radiation oncologists as an indispensable imaging modality to aid delineation of targets and organs-at-risk (OARs) in radiotherapy. However, the additional scan increases imaging dose, labor, and financial costs, as well as the likelihood of clinical consequences such as contrast-induced acute kidney injury. The purpose of this work is to propose a deep learning-based method for CEDECT imaging from non-enhanced single-energy CT (NonSECT).

Methods: A cohort of 60 head-and-neck cancer patients was retrospectively investigated. During radiotherapy simulation, CEDECT images were acquired on a twin-beam CT scanner 7-20 minutes after the NonSECT planning scans. The NonSECTs were fused to the CEDECTs via rigid and deformable registration; the NonSECT and CEDECT images were then treated as the input and ground-truth images (gtH and gtL), respectively. A deep learning-based method, named Dual-Net, was proposed to generate the synthetic high- and low-energy images (synH and synL). Dual-Net was trained with several supervision mechanisms to optimize its learnable parameters. Two generators and two discriminators were implemented in the framework to build the forward and backward mappings between the CEDECT and NonSECT images and to assess the quality of the synthetic CEDECT images, respectively.
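The abstract does not specify Dual-Net's architecture or loss functions; the sketch below illustrates one plausible realization of a dual-generator, dual-discriminator framework with forward and backward mappings (a cycle-consistent GAN) in PyTorch. All module names, layer sizes, and loss weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1),
        nn.InstanceNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    """Toy encoder-decoder standing in for the (unspecified) real network."""
    def __init__(self, cin, cout):
        super().__init__()
        self.net = nn.Sequential(block(cin, 32), block(32, 32),
                                 nn.Conv2d(32, cout, 3, padding=1))

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic scoring local realism of its input."""
    def __init__(self, cin):
        super().__init__()
        self.net = nn.Sequential(block(cin, 32), nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

# Forward mapping G: NonSECT (1 channel) -> CEDECT (2 channels: synH, synL);
# backward mapping F: CEDECT -> NonSECT, as in cycle-consistent GANs.
G, F = Generator(1, 2), Generator(2, 1)
D_ce, D_non = Discriminator(2), Discriminator(1)

l1, mse = nn.L1Loss(), nn.MSELoss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

# Placeholder batch: registered NonSECT input and gtH/gtL ground truth.
non = torch.randn(4, 1, 128, 128)
gt = torch.randn(4, 2, 128, 128)

opt_g.zero_grad()
syn = G(non)        # synthetic CEDECT (synH, synL)
rec_non = F(syn)    # cycle back to NonSECT
syn_non = F(gt)     # backward mapping from real CEDECT

loss_sup = l1(syn, gt)  # supervised fidelity to gtH/gtL
loss_adv = mse(D_ce(syn), torch.ones_like(D_ce(syn)))             # fool CE critic
loss_adv_b = mse(D_non(syn_non), torch.ones_like(D_non(syn_non))) # fool NonSECT critic
loss_cyc = l1(rec_non, non)  # cycle consistency

(loss_sup + loss_adv + loss_adv_b + 10.0 * loss_cyc).backward()
opt_g.step()  # discriminator updates omitted for brevity
```

In this reading, the supervised L1 term exploits the registered gtH/gtL pairs, while the two discriminators and the cycle term regularize the forward and backward mappings; the actual supervision mechanisms used by the authors are not detailed in the abstract.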

Results: Qualitative and quantitative evaluations demonstrated the feasibility of the proposed framework for generating CEDECT images from NonSECT. Mean absolute error (MAE) and structural similarity index measure (SSIM) relative to the ground-truth images were 23.27±7.86 HU and 0.969±0.026 for synH, and 25.85±10.95 HU and 0.968±0.027 for synL, respectively. Peak signal-to-noise ratio (PSNR) was 25.42±2.73 dB for synH and 25.11±2.83 dB for synL.
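For reference, the sketch below shows one common way to compute these metrics with NumPy and scikit-image; the normalization and `data_range` choice are assumptions, since the abstract does not describe the evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(syn, gt):
    """MAE (HU), SSIM, and PSNR of a synthetic volume against ground truth."""
    syn = syn.astype(np.float64)
    gt = gt.astype(np.float64)
    # Assumed intensity range; the paper's windowing/normalization is not stated.
    data_range = gt.max() - gt.min()
    mae = float(np.mean(np.abs(syn - gt)))
    ssim = structural_similarity(gt, syn, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, syn, data_range=data_range)
    return mae, ssim, psnr

# Hypothetical usage with co-registered HU volumes:
# mae_h, ssim_h, psnr_h = evaluate(synH, gtH)
# mae_l, ssim_l, psnr_l = evaluate(synL, gtL)
```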

Conclusion: A deep learning-based method is proposed and validated for synthesizing CEDECT from NonSECT. The proposed approach shows promise for removing the CEDECT scan from radiotherapy simulation while retaining the ability to provide synthetic CEDECT images.

Keywords

Not Applicable / None Entered.

Taxonomy

Not Applicable / None Entered.
