Purpose: To remove the interference of irrelevant organs, we trained an organ extraction neural network (OrXnet) that isolates the lung projection from the composite X-ray projection image.
Methods: 1018 CT scans of 1011 patients were obtained from LIDC-IDRI. An open-source pretrained lung segmentation network (lungmask) was used to automatically delineate the lung volume. 2D anterior-posterior (AP) projections were simulated on GPU via raytracing, based on a generic geometry with a 1828 mm source-to-detector distance and 200 mm pixel size. An ideal point source was assumed, and scattered photons were ignored. Separate lung-only projections were generated by including only the delineated lung volume in the raytracing. We excluded defective cases in which the lung segmentation was inaccurate, the lung area was not within the field-of-view, or artifacts were noticeable, leaving 930 valid scans, which were split into training, validation, and test sets of 730, 100, and 100 scans, respectively. A U-Net was then trained with an MSE loss function to transform the full projections into lung-only projections. In addition to visual examination of the results, we evaluated OrXnet with PSNR and SSIM metrics computed between the ground-truth and predicted lung projections on the test set.
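The projection-simulation step can be illustrated with a minimal sketch. Note the assumptions: the study uses cone-beam raytracing from an ideal point source, whereas this toy example uses parallel rays (a simple sum of attenuation values along the AP axis); the function name and toy volume are hypothetical, not from the study.

```python
import numpy as np

def simulate_projection(ct_volume, mask=None):
    """Simulate a 2D AP projection as line integrals through a CT volume.

    Hypothetical simplification: parallel rays along axis 1 (the AP
    direction), not the cone-beam point-source geometry used in the study.
    If a binary mask is given, only the masked (e.g. lung) voxels contribute,
    yielding an organ-only projection.
    """
    vol = ct_volume if mask is None else ct_volume * mask
    # Line integral along the AP axis (Beer-Lambert exponent, no scatter)
    return vol.sum(axis=1)

# Toy 4x4x4 volume with a 2x2x2 "lung" region marked in a binary mask
ct = np.ones((4, 4, 4))
lung_mask = np.zeros_like(ct)
lung_mask[1:3, 1:3, 1:3] = 1

full_proj = simulate_projection(ct)             # composite projection
lung_proj = simulate_projection(ct, lung_mask)  # lung-only projection
```

Paired (full, lung-only) projections produced this way form the input/target pairs for training the U-Net.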
Results: The predicted projections generally showed high fidelity to the ground-truth lung projections and revealed anatomies imperceptible in the original composite projections. A PSNR of 29.666 and an SSIM of 0.9654 were achieved. However, 6% of the predicted lung-only projections exhibited missing information, appearing as holes in the lung, indicating room for further improvement.
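The PSNR metric used for evaluation can be sketched as follows (SSIM is more involved; an implementation is available as `structural_similarity` in scikit-image). The `data_range` argument and the toy arrays here are illustrative assumptions, not values from the study.

```python
import numpy as np

def psnr(gt, pred, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between ground-truth and prediction.

    data_range is the dynamic range of the images (assumed 1.0 here for
    images normalized to [0, 1]).
    """
    mse = np.mean((gt - pred) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a uniform error of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB
gt = np.array([[0.0, 1.0], [0.5, 0.25]])
pred = gt + 0.1
print(round(psnr(gt, pred), 3))  # -> 20.0
```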
Conclusion: This work presents a novel network, OrXnet, that distills a single-organ projection from the composite chest radiograph. The study demonstrated high-fidelity lung extraction that revealed subtle structures otherwise obscured by overlapping anatomies. The network may thus find applications in diagnosis and in real-time image-guided radiotherapy for patient positioning and target localization.