Purpose: Parametric maps are generally derived from dual-energy (DE) computed tomography (CT), which can differentiate materials. This study aims to develop deep generative models that produce parametric maps from single-energy (SE) CT, trained with clinically used dual-layer DE-CT spectral results.
Methods: A Pix2Pix conditional generative adversarial network (cGAN) and a U-net were used to generate parametric maps from conventional polychromatic CT data. For this purpose, effective atomic number (EAN) and 120 kVp CT data of 63 patients were collected from Philips IQon spectral results, including head & neck, chest, and abdomen & pelvis images. To learn the relationship between parametric maps and CT values, the full image set was first processed with both the cGAN and the U-net; the U-net was then applied separately to the head & neck and chest images. This study was conducted under IRB approval. The generated parametric maps were compared with those from the original spectral results. Image profiles of the input and output data were obtained and compared. Model performance was then assessed with R-square and pixel similarity values.
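The evaluation described above can be sketched as follows. This is a minimal illustration, not the study's implementation: R-square is computed as the standard coefficient of determination over all pixels, and "pixel similarity" is assumed here to mean the fraction of pixels agreeing within a relative tolerance, since the abstract does not specify the exact definition.

```python
import numpy as np

def r_square(reference, generated):
    """Coefficient of determination (R^2) between reference and generated maps."""
    ref = reference.ravel().astype(float)
    gen = generated.ravel().astype(float)
    ss_res = np.sum((ref - gen) ** 2)          # residual sum of squares
    ss_tot = np.sum((ref - ref.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def pixel_similarity(reference, generated, tolerance=0.05):
    """Fraction of pixels whose relative error is within `tolerance`.
    Illustrative definition only; the study's exact metric is not stated."""
    ref = reference.astype(float)
    gen = generated.astype(float)
    denom = np.maximum(np.abs(ref), 1e-8)      # avoid division by zero
    return float(np.mean(np.abs(gen - ref) / denom <= tolerance))

# Toy usage with a synthetic "EAN map" and a slightly perturbed copy
rng = np.random.default_rng(0)
ean_ref = rng.random((64, 64)) + 1.0
ean_gen = ean_ref + 0.01 * rng.standard_normal((64, 64))
print(r_square(ean_ref, ean_gen), pixel_similarity(ean_ref, ean_gen))
```

Both metrics return 1.0 for a perfect match and decrease as generated maps deviate from the spectral-CT reference.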
Results: The preliminary results showed that the U-net model outperformed the cGAN model. Compared with the original parametric maps, the image profiles generated by the U-net were smoother. For the head & neck and chest EAN maps, the R-square values were 0.9836 and 0.9793, and the pixel similarity values were 0.9010 and 0.8748, respectively.
Conclusion: We developed a deep learning model for generating parametric maps from conventional CT images and evaluated its accuracy. Further work will include generating relative electron density maps from the commercial DE-CT spectral results and from in-house calculated parametric maps.
Funding Support, Disclosures, and Conflict of Interest: This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (No. NRF-2019M2A2B4095126).