Session: Multi-Disciplinary General ePoster Viewing

Interpretable Feature-Specific Generative Adversarial Network Model for 3D CBCT Image Enhancement for Radiomics

Z Qiao1*, Z Zhang2, Z Jiang3, Y Lai4, J Lee5, D Wu6, C Beltran7, L Ren8, M Huang9, (1) University of Florida, Gainesville, FL, (2) Duke University, Durham, NC, (3) Duke University, Durham, NC, (4) University of Texas at Arlington, Arlington, TX, (5) Duke Radiation Oncology, Durham, NC, (6) University of Florida, Gainesville, FL, (7) Mayo Clinic, Jacksonville, FL, (8) University of Maryland, Baltimore, MD, (9) Mayo Clinic Florida, Jacksonville, FL

Presentations

PO-GePV-M-348 (Sunday, 7/10/2022)   [Eastern Time (GMT-4)]

ePoster Forums

Purpose: This study aimed to develop a feature-specific deep learning model that enhances daily CBCT image quality and thereby improves the accuracy of radiomics analysis for precision medicine.

Methods: We developed a region-of-interest (ROI)-focused 3D Pix2Pix generative adversarial network (GAN) with an interpretable, feature-specific loss function to enhance CBCT images for radiomics analysis. The key innovation is that specific radiomic features are incorporated directly into the GAN loss function. The model was trained on CT and Monte Carlo-simulated CBCT image pairs from 29 lung cancer patients (TCIA) and tested on 5 patients. Each CT image set was forward-projected into 899 projections and reconstructed into CBCT with Varian iTools under clinical settings. Image augmentation was then performed by translation and rotation for each patient. The enhanced CBCT (eCBCT), with CT as the ground truth, was generated by the interpretable feature-specific GAN. For each test patient, 1032 radiomic features, comprising 86 base-level features and 946 Laplacian-of-Gaussian (LoG) and wavelet features, were extracted from the CBCT and eCBCT images and compared with the CT-derived features using percentage error.
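The abstract does not give the exact loss formulation. The sketch below (Python/PyTorch) illustrates, under stated assumptions, how an ROI-focused L1 term and a feature-specific term could be added to the standard Pix2Pix adversarial loss; the function names, loss weights, and the differentiable skewness surrogate are assumptions, and the LoG filtering that precedes the skewness feature is omitted for brevity.

# Minimal sketch, not the authors' implementation: a generator loss that
# combines the Pix2Pix adversarial term, an ROI-focused L1 term, and a
# feature-specific term (here a differentiable skewness surrogate).
import torch
import torch.nn.functional as F

def skewness(x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Differentiable skewness of the voxel intensities inside a masked ROI.
    mu = x.mean()
    sigma = x.std() + eps
    return ((x - mu) ** 3).mean() / sigma ** 3

def generator_loss(d_fake_logits, e_cbct, ct, roi_mask,
                   lambda_l1=100.0, lambda_feat=10.0):  # weights are hypothetical
    # Adversarial term: the generator tries to make the discriminator output "real".
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # ROI-focused L1 between the enhanced CBCT and the ground-truth CT.
    l1 = (roi_mask * (e_cbct - ct)).abs().sum() / roi_mask.sum().clamp(min=1)
    # Feature-specific term: match the ROI skewness of the ground-truth CT.
    feat = (skewness(e_cbct[roi_mask.bool()])
            - skewness(ct[roi_mask.bool()])).abs()
    return adv + lambda_l1 * l1 + lambda_feat * feat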

Results: Radiomics errors of the 86 base-level features (out of 1032), comprising 18 first-order statistics, 22 gray-level co-occurrence matrix, 16 gray-level run-length matrix, 16 gray-level size-zone matrix, and 14 gray-level dependence matrix features, were compared between CBCT and eCBCT. The 2D GAN model reduced the average radiomics error by 20%, 4.16%, 0.2%, 2.9%, and 11.3% for these feature categories, respectively. The 3D ROI-L1 model further reduced the errors by 3.18%, 8.95%, 5.45%, 10.83%, and 5.63%, respectively. Finally, the 3D feature-specific model, whose loss specifically targeted the high-level LoG 4 mm skewness feature, reduced that feature's average error from 31.97% (ROI-L1 loss model) to 6.85%.
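For reference, the sketch below is a hypothetical helper, not the authors' evaluation code, showing how a per-category percentage error against the CT ground truth could be computed from feature dictionaries already extracted from the CT, CBCT, and eCBCT images (e.g., with PyRadiomics); the variable names (ct_feats, cbct_feats, ecbct_feats, glcm_names) are assumptions.

# Hypothetical helper illustrating the percentage-error metric; each feature
# dictionary maps feature names to values extracted from one image.
import numpy as np

def percent_error(gt: dict, test: dict, names: list, eps: float = 1e-12) -> float:
    # Mean absolute percentage error over one feature category.
    errs = [abs(test[n] - gt[n]) / (abs(gt[n]) + eps) * 100.0 for n in names]
    return float(np.mean(errs))

# Example: compare CBCT vs. eCBCT errors for the 22 GLCM features.
# print(percent_error(ct_feats, cbct_feats, glcm_names),
#       percent_error(ct_feats, ecbct_feats, glcm_names))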

Conclusion: The 3D feature-specific GAN model achieved the best image-enhancement performance, demonstrating the feasibility of feature-specific image enhancement for accurate radiomics analysis.

Funding Support, Disclosures, and Conflict of Interest: NIH Grants R01-EB028324, R01-EB001838, R01-CA226899

Keywords

Cone-beam CT, Quantitative Imaging

Taxonomy

IM/TH- Cone Beam CT: Radiomics
