Session: Multi-Disciplinary General ePoster Viewing

A Two-Step Method to Improve Image Quality of CBCT with Phantom-Based Supervised and Patient-Based Unsupervised Learning Strategies

Y Liu1,2*, X Chen1, J Zhu1, B Yang1, R Wei1, R Xiong2, H Quan2, Y Liu1, J Dai1, K Men1, (1) National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences, Beijing, 100021, CN, (2) Wuhan University, Wuhan, 430072, CN

Presentations

PO-GePV-M-287 (Sunday, 7/10/2022) [Eastern Time (GMT-4)]

ePoster Forums

Purpose: In this study, we aimed to develop a deep learning framework to improve cone-beam computed tomography (CBCT) image quality for adaptive radiation therapy (ART) applications.

Methods: Paired CBCT and planning CT images of 2 pelvic phantoms and 91 patients diagnosed with prostate cancer (15 patients reserved for testing) were included in this study. First, the well-matched images of the rigid phantoms were used to train a U-net; this supervised learning step reduces severe artifacts. Second, the phantom-trained U-net generated intermediate CT images from the patient CBCT images. Finally, a cycle-consistent generative adversarial network (CycleGAN) was trained with the intermediate CT images and deformed planning CT images; this unsupervised learning step learns the style of the patient images for further improvement. When testing or applying the trained model on patient CBCT images, the U-net first generates intermediate CT images from the original CBCT images, and the CycleGAN generator then produces synthetic CT images from the intermediate CT images. The performance was compared with conventional methods (U-net or CycleGAN alone, trained with patient images) on the test set.
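
A minimal PyTorch-style sketch of the two-step inference chain described above is given below. The class names (UNet, CycleGANGenerator), their placeholder layers, and the dummy input are illustrative assumptions for exposition, not the authors' implementation or trained networks; only the chaining of the phantom-trained U-net with the CycleGAN generator follows the method described here.

```python
# Two-step inference sketch: CBCT -> intermediate CT (U-net) -> synthetic CT (CycleGAN generator).
# Network bodies below are stand-ins; real architectures and weights are not reproduced here.
import torch
import torch.nn as nn


class UNet(nn.Module):
    """Stand-in for the phantom-trained U-net (supervised step)."""
    def __init__(self):
        super().__init__()
        # Placeholder layers; a real U-net uses an encoder-decoder with skip connections.
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


class CycleGANGenerator(nn.Module):
    """Stand-in for the CycleGAN generator (unsupervised step)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


@torch.no_grad()
def cbct_to_synthetic_ct(cbct: torch.Tensor,
                         unet: UNet,
                         generator: CycleGANGenerator) -> torch.Tensor:
    """Chain the two trained models at inference time."""
    intermediate_ct = unet(cbct)               # step 1: artifact reduction (phantom-supervised)
    synthetic_ct = generator(intermediate_ct)  # step 2: patient-style refinement (unsupervised)
    return synthetic_ct


if __name__ == "__main__":
    unet, generator = UNet().eval(), CycleGANGenerator().eval()
    cbct_slice = torch.randn(1, 1, 256, 256)   # dummy CBCT slice (batch, channel, H, W)
    sct = cbct_to_synthetic_ct(cbct_slice, unet, generator)
    print(sct.shape)
```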

Results: The proposed two-step method effectively improved CBCT image quality to the level of CT scans. It outperformed the conventional methods for region-of-interest contouring and HU calibration, both of which are important for ART applications. Compared with the U-net alone, it better preserved the anatomical structures of the CBCT. Compared with the CycleGAN alone, it improved the accuracy of CT numbers and effectively reduced artifacts, making it more helpful for identifying the clinical target volume.

Conclusion: This novel two-step method improves CBCT image quality by combining phantom-based supervised and patient-based unsupervised learning strategies. It has strong potential to be integrated into the ART workflow to improve the accuracy of radiotherapy.

Funding Support, Disclosures, and Conflict of Interest: This work was supported by the National Natural Science Foundation of China (12175312, 11975313, 12005302), the Beijing Nova Program (Z201100006820058), and CAMS Innovation Fund for Medical Sciences (2020-I2M-C&T-B-073, 2021-I2M-C&T-A-016).

Keywords

Cone-beam CT, Computer Vision, Image-guided Therapy

Taxonomy

IM- Cone Beam CT: Machine learning, computer vision
