Purpose: Stents are often used as internal surrogates to monitor intra-fractional tumor motion during pancreatic cancer radiotherapy. Based on stent contours generated from planning CT images, the current intrafraction motion review (IMR) system on the Varian TrueBeam only provides a tool to visually verify stent motion and lacks quantitative motion information. The purpose of this study is to develop an automatic stent recognition method for quantitative intra-fractional tumor motion monitoring in pancreatic cancer treatment.
Methods: 466 IMR images from 11 pancreatic cancer patients were retrospectively selected for this study, with the manual contour of the stent on each image serving as the ground truth. An objective attention modeling mechanism was integrated into the U-net framework to address the optimization difficulties of training a deep network with 2D kV IMR images and limited training data. For supervision, we used binary cross-entropy loss combined with Dice loss during training. The deep neural network was trained to capture more contextual information to predict binary stent masks. 68% of the 466 images were randomly selected and split into training (85%, 269 images) and validation (15%, 47 images) sets, while the remaining 150 images were used for testing.
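The combined supervision described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation (the model is a U-net trained in a deep-learning framework); the function name `bce_dice_loss` and the smoothing constant `eps` are assumptions for illustration.

```python
import numpy as np

def bce_dice_loss(pred, target, eps=1e-6):
    """Binary cross-entropy plus Dice loss for binary mask prediction.

    pred   -- predicted foreground probabilities in [0, 1]
    target -- binary ground-truth mask (same shape as pred)
    Illustrative sketch only; the study's model uses an equivalent
    loss inside its training framework.
    """
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    # pixel-wise binary cross-entropy, averaged over the image
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # soft Dice coefficient on the probability map
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    # Dice *loss* is 1 - Dice; sum the two terms for supervision
    return bce + (1.0 - dice)
```

The Dice term counteracts the foreground/background class imbalance typical of a thin stent in a large kV image, while the cross-entropy term keeps per-pixel gradients well behaved.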
Results: Our stent segmentation results were compared with the manually segmented contours. The trained model achieved a detection rate of 93%. The mean stent Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), 2D mean surface distance (MSD), and residual mean square distance (RMSD) were 0.91 ± 0.08, 1.12 ± 1.84 mm, 0.26 ± 0.43 mm, and 0.38 ± 0.59 mm, respectively.
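For reference, the two headline metrics can be sketched in NumPy as follows. This is a simplified pixel-based version (the study may use surface-based definitions with physical pixel spacing); the function names and the `spacing` parameter are assumptions for illustration.

```python
import numpy as np

def dsc(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th-percentile Hausdorff distance between the foreground
    pixels of two binary masks, in units of `spacing` (e.g. mm).
    Simplified point-set sketch; surface-based variants also exist."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    # pairwise Euclidean distances between the two foreground point sets
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of a to its nearest point of b
    d_ba = d.min(axis=0)  # each point of b to its nearest point of a
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)
```

Taking the 95th percentile rather than the maximum makes HD95 robust to a few outlier pixels in either contour, which is why it is preferred over the plain Hausdorff distance for segmentation evaluation.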
Conclusion: We developed a novel deep learning-based approach to automatically segment the stent from kV IMR images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for quantitative intra-fractional motion monitoring in pancreatic cancer radiotherapy.