Purpose: Despite its high temporal resolution for real-time tumor monitoring, 2D radiography is of limited utility in image-guided radiotherapy because anatomical structures overlap in projection images. In thoracic radiotherapy, rib shadows often obscure the lung tumor. This study develops a machine learning model for rib shadow suppression (RSS).
Methods: 490 CT scans and the corresponding rib cortical bone contours were obtained from the RibFrac challenge and the RibSeg dataset, respectively. To generate ground-truth rib-removal projections, we replaced the voxels inside the rib contours with muscle-equivalent material (HU = 45). Anterior-posterior (AP) projections of the original and rib-removal CTs were computed on GPU to simulate 2D projections of an onboard imager (OBI). The detector is 2048 × 2048 pixels with a 0.2 mm pixel size; the source-to-detector and patient-to-detector distances are 1828 mm and 200 mm, respectively. Scattered photons were neglected. The same number of sampling points was used for every detector element to maximize GPU parallelization. A U-Net was trained on the paired original and rib-removal projections with a mean squared error (MSE) loss. The 490 pairs of AP projections were divided into 420, 70, and 70 pairs for training, validation, and testing, respectively.
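The ground-truth generation step above can be sketched as follows. This is a minimal, hypothetical illustration: the names (`ct`, `rib_mask`) are assumed, and a parallel-beam line integral with unit voxel spacing stands in for the study's actual cone-beam geometry (SDD 1828 mm, GPU ray sampling).

```python
import numpy as np

def suppress_ribs(ct: np.ndarray, rib_mask: np.ndarray, muscle_hu: float = 45.0) -> np.ndarray:
    """Return a copy of the CT with rib voxels set to muscle-equivalent HU (= 45)."""
    out = ct.copy()
    out[rib_mask] = muscle_hu
    return out

def ap_projection(ct: np.ndarray) -> np.ndarray:
    """Toy AP projection: convert HU to linear attenuation relative to water,
    then integrate along the anterior-posterior axis (axis 0 by convention here)."""
    mu_water = 0.02                      # 1/mm, rough diagnostic-energy value (assumption)
    mu = mu_water * (1.0 + ct / 1000.0)  # HU -> linear attenuation coefficient
    mu = np.clip(mu, 0.0, None)          # air (HU = -1000) attenuates ~0
    return mu.sum(axis=0)                # line integral with unit voxel spacing

# Tiny synthetic example: a bone-like 'rib' region embedded in soft tissue.
ct = np.full((4, 8, 8), 40.0)
rib = np.zeros_like(ct, dtype=bool)
rib[1, 2:4, 2:4] = True
ct[rib] = 700.0
proj = ap_projection(ct)                       # original projection (rib shadow present)
proj_rr = ap_projection(suppress_ribs(ct, rib))  # ground-truth rib-removal projection
```

In this sketch, the rib region produces a bright shadow in `proj` that is absent in `proj_rr`; the (`proj`, `proj_rr`) pairs play the role of the network's input and training target.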
Results: The rib shadow is visibly suppressed in the predicted projection images, improving the conspicuity of the underlying lung tissue. Quantitatively, the MSE is 2.93e-5 on the training set, 1.67e-4 on the validation set, and 1.53e-4 on the test set. For reference, the MSE between the original and ground-truth rib-removal projections in the test set is 1.14e-3, 7.45 times the test MSE. The residual mismatch may stem from atypical cases, such as patients with prior rib surgery.
Conclusion: This work presents a novel RSS neural network that effectively suppresses rib shadows in simulated OBI images and highlights the underlying lung anatomy for image-guided lung radiotherapy.