Purpose: Cone beam CT (CBCT) has been widely implemented to guide the daily delivery of lung radiotherapy, including Stereotactic Body Radiation Therapy (SBRT); however, clinical applications that depend on quantitative CBCT are limited by poor tissue contrast, image artifacts, and instability of Hounsfield Unit (HU) values. The purpose of this study is to develop a deep learning-based approach to improve CBCT image quality for quantitative analysis during radiotherapy.
Methods: A deep learning-based model, which integrates histogram matching and perceptual supervision into a cycle-consistent adversarial network (Cycle-GAN), was trained to learn the mapping between paired thoracic CBCT and planning CT images. Perceptual supervision was adopted to suppress blurring at tissue interfaces, and histogram matching was introduced into the Cycle-GAN to constrain the intensity distribution of the synthetic CT (sCT) to be close to that of the planning CT. The proposed algorithm was evaluated using 80 thoracic breath-hold CBCT images from 20 lung SBRT patients, each with 3-5 CBCTs. Planning CT images were used as the ground truth for evaluating the sCTs derived from co-registered CBCTs.
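The histogram-matching step described above can be sketched as follows. This is a minimal NumPy illustration of CDF-based histogram matching, not the authors' exact preprocessing pipeline; the function name and array handling are assumptions for illustration:

```python
import numpy as np

def match_histograms(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the intensity distribution of `source` (e.g. a CBCT volume)
    onto that of `reference` (e.g. the planning CT) by matching
    empirical cumulative distribution functions (CDFs)."""
    src_flat = source.ravel()
    # Unique source intensities, their positions, and counts
    src_vals, src_idx, src_counts = np.unique(
        src_flat, return_inverse=True, return_counts=True
    )
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images
    src_cdf = np.cumsum(src_counts) / src_flat.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference intensity
    # at the same quantile
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)
```

In practice this transform pushes the sCT's HU histogram toward the planning CT's, which is what stabilizes the HU values for quantitative use.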
Results: The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) computed between the sCTs and planning CTs were 63.2 HU, 30.2 dB, and 0.96, respectively. The proposed method outperformed other current deep learning methods and can generate a quantitatively accurate sCT within seconds.
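The three reported metrics can be computed as follows. This is a minimal NumPy sketch under stated assumptions: the HU data range used for PSNR is not given in the abstract, and NCC is taken here as the Pearson correlation of the two volumes:

```python
import numpy as np

def evaluate_sct(sct: np.ndarray, plan_ct: np.ndarray,
                 data_range: float = 2000.0):
    """Return (MAE in HU, PSNR in dB, NCC) between a synthetic CT and
    the co-registered planning CT. `data_range` (peak-to-peak HU span)
    is an assumed value; the abstract does not state the one used."""
    sct = sct.astype(np.float64)
    plan_ct = plan_ct.astype(np.float64)
    diff = sct - plan_ct
    mae = np.mean(np.abs(diff))
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse) if mse > 0 else np.inf
    # Normalized cross-correlation of the mean-centered volumes
    a = sct - sct.mean()
    b = plan_ct - plan_ct.mean()
    ncc = np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    return mae, psnr, ncc
```

With a uniform global HU offset, MAE equals the offset while NCC stays at 1.0, which is why all three metrics together give a fuller picture of sCT accuracy than any one alone.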
Conclusion: We developed a new CBCT correction method that generates a quantitatively accurate thoracic sCT from a CBCT and demonstrated that it significantly improves CBCT image quality. This method has the potential to enhance the accuracy of CBCT-guided radiotherapy, in particular for online adaptive applications that depend on accurate, real-time HU values for dose calculations.