
Unsupervised CBCT-CT Synthesis for Prostate Radiotherapy Treatment with Contrastive Learning

Y Pang*, X Chen, T Royce, A Wang, G Szalkowski, S To, P Yap, J Lian, University of North Carolina, Chapel Hill, NC

Presentations

PO-GePV-M-157 (Sunday, 7/10/2022)   [Eastern Time (GMT-4)]

ePoster Forums

Purpose: Cone-beam computed tomography (CBCT) is an essential tool for image-guided radiotherapy, as a patient’s anatomy can change drastically over the course of treatment. However, pelvic CBCT image quality is much worse than that of planning CT, which makes accurate soft-tissue localization for treatment challenging. We propose a solution based on the dual contrastive learning generative adversarial network (DCLGAN) to synthesize planning-CT-like images while preserving the anatomical structures present in treatment CBCT images.

Methods: Contrastive learning has gained increasing attention in computer vision for its power to learn representations efficiently. We use a contrastive learning model (DCLGAN) to establish the relationship between the two image modalities by maximizing mutual information, and to reconstruct the learned representation into a CT-styled image. This network structure allows the model to be trained more efficiently, and the anatomical structures are more evident than with a traditional cycle-consistent generative adversarial network (CycleGAN). Our model was trained on 40 CBCT and 40 planning CT images from previous prostate cancer treatments. We used 5 paired CBCT and CT images for validation and 10 paired images for evaluation.
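The mutual-information maximization in contrastive translation models of this family is typically implemented as a patch-wise InfoNCE loss: the feature of a patch in the synthesized image is pulled toward the feature of the patch at the same location in the input, and pushed away from features of other patches. A minimal NumPy sketch under that assumption (function name, feature shapes, and temperature are illustrative, not the authors' implementation):

```python
import numpy as np

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """Patch-wise InfoNCE loss (sketch).

    feat_q, feat_k: (num_patches, dim) feature vectors extracted from the
    synthesized and input images at matching spatial locations. The positive
    pair for patch i is (feat_q[i], feat_k[i]); all other patches in feat_k
    serve as negatives.
    """
    # L2-normalize so the dot product is a cosine similarity.
    q = feat_q / np.linalg.norm(feat_q, axis=1, keepdims=True)
    k = feat_k / np.linalg.norm(feat_k, axis=1, keepdims=True)
    logits = q @ k.T / tau                       # (N, N) similarity matrix
    # Softmax cross-entropy with positives on the diagonal.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

In practice the patch features would come from intermediate layers of the generator's encoder; DCLGAN applies such losses in both translation directions.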

Results: We compared the synthetic CT with the original CBCT and reference CT images. The average peak signal-to-noise ratio (PSNR, higher is better) between the synthetic CT and reference CT images is 20.17±2.32, while the PSNR between the original CBCT and CT images is 8.14±2.90. The mean absolute error (MAE, lower is better) between the synthetic CT and the reference CT images is 11.52±4.92, while the MAE between the original CBCT and CT images is 40.21±11.73.
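The two reported metrics can be computed as follows. This is a generic sketch, since the abstract does not state the intensity range or normalization used; the `data_range` value here is an assumed placeholder:

```python
import numpy as np

def mae(a, b):
    # Mean absolute error between two images (lower is better).
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).mean()

def psnr(a, b, data_range=255.0):
    # Peak signal-to-noise ratio in dB (higher is better).
    # data_range is the maximum possible intensity span (assumption here).
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For CT-like images the metrics would typically be evaluated on Hounsfield-unit arrays with an appropriately chosen `data_range`.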

Conclusion: DCLGAN efficiently reduces the noise and artifacts of CBCT images and synthesizes CT-styled images that preserve the content of the CBCT images. This approach can create high-quality CT images to support more accurate radiotherapy treatment and adaptive replanning.

Funding Support, Disclosures, and Conflict of Interest: This project is in part supported by NIH 1R01CA206100.


Taxonomy

IM/TH- Image Analysis (Single Modality or Multi-Modality): Machine learning
