Purpose: Traditionally, a tomographic image is formulated as the solution to an inverse problem for a given set of measured data from different angular views. Here, we propose a geometry-integrated deep learning model for 2D-to-3D tomographic image reconstruction with ultra-sparse sampling across various patients.
Methods: We propose a geometry-informed deep learning framework for 3D tomographic image reconstruction. The framework consists of three modules: a) a 2D projection generation network that learns to generate novel-view projections from the given sparse views; b) a geometric back-projection operator that transforms the 2D projections into 3D images, referred to as geometry-preserving images, by geometrically relating the pixelated 2D input data to the corresponding ray lines in 3D space; and c) a 3D image refinement network that learns to refine the back-projected volumes and generate the final 3D images. To evaluate the proposed approach, we experiment on a public CT dataset containing 1018 lung CT images, split 80%/20% into training and testing sets, with the projection images digitally produced from the CT images using cone-beam CT geometry.
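The back-projection in module (b) can be illustrated with a minimal voxel-driven sketch: each voxel is projected through the cone-beam geometry onto the detector, and the corresponding projection pixel value is copied back along the ray line into the volume. This is a simplified single-view illustration with hypothetical geometry parameters (source-to-axis distance `sad`, source-to-detector distance `sdd`, nearest-neighbour sampling), not the authors' exact operator.

```python
import numpy as np

def backproject_view(proj, vol_shape, sad=1000.0, sdd=1500.0,
                     voxel=1.0, pix=1.0):
    """Voxel-driven cone-beam back-projection of one 2D view (a sketch).

    proj: 2D projection image of shape (nv, nu).
    vol_shape: (nz, ny, nx) of the output volume.
    sad/sdd: source-to-axis and source-to-detector distances (assumed).
    """
    nz, ny, nx = vol_shape
    nv, nu = proj.shape
    # Voxel-centre coordinates with the isocentre at the volume centre.
    zs = (np.arange(nz) - (nz - 1) / 2.0) * voxel
    ys = (np.arange(ny) - (ny - 1) / 2.0) * voxel
    xs = (np.arange(nx) - (nx - 1) / 2.0) * voxel
    Z, Y, X = np.meshgrid(zs, ys, xs, indexing="ij")
    # Cone-beam magnification of each voxel's plane onto the detector.
    mag = sdd / (sad + Y)
    # Detector coordinates of each voxel's ray (nearest-neighbour).
    u = np.clip(np.rint(X * mag / pix + (nu - 1) / 2.0).astype(int), 0, nu - 1)
    v = np.clip(np.rint(Z * mag / pix + (nv - 1) / 2.0).astype(int), 0, nv - 1)
    # Copy each pixel value along its ray line into the 3D volume.
    return proj[v, u]
```

A uniform projection back-projects to a uniform volume, which makes the geometric correspondence easy to sanity-check before wiring the operator between the two networks.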
Results: We deploy the trained model on the held-out testing set for few-view 3D image reconstruction and compare the results with the ground truth. For single-, two-, and three-view reconstruction, the average NRMSE/SSIM/PSNR values over all testing data are 0.368/0.734/20.770, 0.300/0.807/22.687, and 0.274/0.838/23.669, respectively. Moreover, the proposed model generates images close to the target images despite the large variance in anatomic structures across patients, indicating its potential for volumetric imaging even with very few views.
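For reference, the NRMSE and PSNR figures above can be computed as sketched below; note that the exact NRMSE normalisation convention (reference range, mean, or norm) is not stated in the abstract, so this sketch assumes normalisation by the reference image's intensity range.

```python
import numpy as np

def nrmse(ref, img):
    # Root-mean-square error normalised by the reference's intensity
    # range (one common convention; the abstract does not specify).
    rmse = np.sqrt(np.mean((ref - img) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, img, data_range=None):
    # Peak signal-to-noise ratio in dB relative to the data range.
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM additionally involves local luminance, contrast, and structure statistics; in practice it is usually taken from an existing implementation such as scikit-image's `structural_similarity` rather than re-derived.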
Conclusion: We present a novel geometry-integrated deep learning model for volumetric imaging with ultra-sparse sampling across patients, which may offer a useful solution for image-guided interventional procedures with a simplified imaging system design.
Funding Support, Disclosures, and Conflict of Interest: The authors acknowledge funding support from the Stanford Bio-X Bowes Graduate Student Fellowship, the National Cancer Institute (R01CA227713), a Google Faculty Research Award, and Human-Centered Artificial Intelligence of Stanford University.