
Session: Multi-Disciplinary General ePoster Viewing

Using StyleGAN for Unpaired Image Translation Between CBCT and CT for Adaptive Proton Treatment

D Tang*, T Bortfeld, S Yan, Massachusetts General Hospital and Harvard Medical School, Boston, MA


PO-GePV-M-153 (Sunday, 7/10/2022) [Eastern Time (GMT-4)]

ePoster Forums

Purpose: To enable adaptive radiotherapy in proton treatments, we investigate the use of style transfer to generate synthetic computed tomography (sCT) images from cone-beam CT (CBCT) inputs. We propose two methods, Encoder Bootstrapping and Latent Vector Style Mixing, both employing StyleGAN.

Methods: Encoder Bootstrapping injects the latent vector of an input CBCT image into a StyleGAN to generate sCT images, preserving the anatomical structure of the CBCT image while transferring the style of CT images. Latent Vector Style Mixing achieves the same objective by mixing style layers of a CBCT latent vector with those of a CT latent vector on a StyleGAN that is trained on both modalities. These frameworks, which utilize StyleGAN2-ADA generators and ResNet encoders, were trained using CBCT and planning CT images from 146 prostate cancer patients and tested on 40 independent patients. Planning CT axial slices were deformably registered to CBCT slices and served as the ground truth. The mean absolute error (MAE) and root-mean-squared error (RMSE) were calculated to evaluate sCT quality. Results were compared to those from the unpaired image translation method CycleGAN. Proton treatment planning using sCT will also be explored.
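The style-mixing step and the evaluation metrics can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the layer count, latent dimension, and the `mix_from` crossover index are assumptions for a generic StyleGAN2 W+ latent space.

```python
import numpy as np

# Illustrative W+ latent shape for a StyleGAN2 generator (assumed values).
NUM_LAYERS = 18
LATENT_DIM = 512

def style_mix(w_cbct, w_ct, mix_from=8):
    """Latent Vector Style Mixing (sketch): keep the early layers,
    which encode coarse anatomical structure, from the CBCT latent,
    and take the later style/texture layers from the CT latent."""
    w_mixed = w_cbct.copy()
    w_mixed[mix_from:] = w_ct[mix_from:]
    return w_mixed

def mae(a, b):
    """Mean absolute error between an sCT and its registered planning CT."""
    return np.mean(np.abs(a - b))

def rmse(a, b):
    """Root-mean-squared error between an sCT and its registered planning CT."""
    return np.sqrt(np.mean((a - b) ** 2))
```

The mixed W+ latent would then be fed through the StyleGAN2-ADA generator to synthesize the sCT slice, which is scored against the deformably registered planning CT with MAE and RMSE.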

Results: With respect to MAE and RMSE, CycleGAN performed best, followed by Latent Vector Style Mixing and then Encoder Bootstrapping. Latent Vector Style Mixing preserved the structure of the input CBCT most accurately.

Conclusion: While preliminary StyleGAN training underperformed relative to CycleGAN, the latent space gives users a degree of control over the behavior of the model that CycleGAN's black box cannot match. Specific layers of the latent vectors, which determine anatomical structure, Hounsfield unit distribution, and image texture, can be tuned manually. Both proposed style transfer methods leverage StyleGAN's inherent ability to manipulate style through the latent space and will be developed further to improve performance.
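The manual per-layer tuning described above can be sketched as selective interpolation in W+ space. The layer bands below (coarse structure, Hounsfield unit distribution, fine texture) are illustrative assumptions, not the authors' measured layer assignments:

```python
import numpy as np

NUM_LAYERS, LATENT_DIM = 18, 512  # assumed W+ shape

# Hypothetical layer bands: which W+ layers govern which image property.
STRUCTURE_LAYERS = slice(0, 4)    # coarse anatomical structure
HU_LAYERS = slice(4, 8)           # Hounsfield unit distribution
TEXTURE_LAYERS = slice(8, 18)     # fine image texture

def tune_layers(w, w_target, layers, alpha=0.5):
    """Interpolate only the selected style layers of latent w toward
    w_target, leaving the remaining layers (e.g., structure) untouched."""
    w_out = w.copy()
    w_out[layers] = (1.0 - alpha) * w[layers] + alpha * w_target[layers]
    return w_out
```

For example, `tune_layers(w_cbct, w_ct, HU_LAYERS, alpha=1.0)` would adopt the CT-like intensity distribution while retaining the CBCT latent everywhere else.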

Funding Support, Disclosures, and Conflict of Interest: I have a research agreement with RaySearch on proton treatment planning and RayIntelligence


Cone-beam CT


IM/TH- Cone Beam CT: Machine learning, computer vision
