Session: Imaging General ePoster Viewing

Sparse-View CT Reconstruction Via Generative Adversarial Network (GAN) Using Fully Convolutional DenseNet (FC-DenseNet)

I Park*, J Chun, B Choi, S Yoo, J Kim, H Kim / Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea

Presentations

PO-GePV-I-21 (Sunday, 7/25/2021)   [Eastern Time (GMT-4)]

Purpose: Sparse-view CT reduces the radiation dose to the patient by acquiring fewer projection views than conventional CT. The conventional filtered back-projection (FBP) algorithm is limited by the insufficient projection information available for sparse-view reconstruction. Iterative reconstruction compensates for this disadvantage, but it is generally time-inefficient and struggles when the number of projection views is dramatically reduced. Deep-learning-based sparse-view reconstruction can address these drawbacks. Although various approaches have been proposed, this work presents a reconstruction method based on a Generative Adversarial Network (GAN) that uses FC-DenseNet as its backbone architecture.

Methods: The input and output of the network were sparse-view CT (45 views) and ground-truth CT (720 views), respectively, both reconstructed with the FBP algorithm. The patient cohort consisted of 74 scans of head-and-neck patients: 37, 28, and 9 scans were used for training, validating, and testing the proposed network, respectively. We trained the GAN using FC-DenseNet and trained separate models with L1 loss, L2 loss, and gradient difference loss. Reconstruction accuracy of the trained network was evaluated with Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM).
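The three training losses compared above can be sketched in a few lines of numpy. This is an illustrative sketch only; the abstract does not specify the exact formulation (e.g., the gradient difference loss exponent or weighting), so the definitions below are common textbook forms:

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error between prediction and ground truth.
    return float(np.mean(np.abs(pred - target)))

def l2_loss(pred, target):
    # Mean squared error between prediction and ground truth.
    return float(np.mean((pred - target) ** 2))

def gradient_difference_loss(pred, target):
    # Gradient difference loss (GDL): penalizes mismatch between the
    # image gradients of the prediction and the ground truth, which
    # encourages sharp edges. Finite differences along rows and columns
    # approximate the image gradients.
    dy_pred, dx_pred = np.diff(pred, axis=0), np.diff(pred, axis=1)
    dy_tgt, dx_tgt = np.diff(target, axis=0), np.diff(target, axis=1)
    return float(np.mean(np.abs(np.abs(dy_tgt) - np.abs(dy_pred)))
                 + np.mean(np.abs(np.abs(dx_tgt) - np.abs(dx_pred))))
```

In a GAN setting, one of these pixel-level losses would typically be added to the adversarial loss of the generator, so each trained model corresponds to a different choice of the pixel-level term.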

Results: The proposed model improved sparse-view CT image quality with minimal streaking artifacts. Generated images were assessed with MAE, PSNR, and SSIM. The original sparse-view CT image reconstructed with the conventional FBP algorithm yielded 53.03, 51.28, and 0.97, respectively. The model with L1 loss yielded 16.88, 44.52, and 0.86; the model with L2 loss yielded 24.41, 36.59, and 0.92; and the model with gradient difference loss yielded 17.10, 31.86, and 0.59.
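Of the evaluation metrics above, MAE and PSNR are straightforward to reproduce; a minimal numpy sketch, assuming images normalized to a known data range (the abstract does not state the intensity scale used):

```python
import numpy as np

def mae(pred, target):
    # Mean Absolute Error: average per-pixel absolute difference.
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, data_range):
    # Peak Signal-to-Noise Ratio in decibels, relative to the
    # maximum possible intensity range of the image.
    mse = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

SSIM additionally compares local luminance, contrast, and structure statistics; in practice a library implementation (e.g., `skimage.metrics.structural_similarity`) is typically used rather than a hand-rolled version.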

Conclusion: Our proposed network adopts a GAN architecture for sparse-view CT image reconstruction. The results demonstrate that the trained network can produce considerably more accurate reconstructed images than the conventional FBP algorithm.

Keywords

Reconstruction

Taxonomy

IM- CT: Image Reconstruction
