Purpose: Image guidance has been widely used in radiation therapy. Correctly identifying the bounding box that contains a region of interest (ROI) is an essential step in both classical and deep learning image analysis. Unfortunately, few anatomical structures are isotropic in shape, and it is challenging to select feature maps from a single scale to detect an anisotropic structure. In this study, we improved our detection network architecture presented at AAPM 2020 to achieve better overall detection performance on CT images.
Methods: We improved the network architecture by fusing feature maps from multiple scales and letting the data decide how to combine them. To detect anisotropic shapes, this fusion was performed independently in the left-right, anterior-posterior, and superior-inferior directions. We applied our network to two applications: 1) T12 detection for checking gross misalignments, and 2) prostate, bladder, and rectum detection for adaptive therapy. The T12 detection model was trained on thirty-five abdominal CT scans acquired with the same imaging protocol and tested on fourteen CT scans. The prostate, bladder, and rectum detection model was trained on thirty CT datasets and tested on six. In both applications, no test images were used during training.
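The direction-wise fusion described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: it assumes the multi-scale feature maps have already been resampled to a common grid, and it uses a softmax over learned raw weights (one weight vector per anatomical axis) so that each direction combines the scales differently. The function and parameter names (`fuse_per_axis`, `axis_weights`) are invented for this sketch.

```python
import numpy as np

def softmax(w):
    """Numerically stable softmax over a 1-D weight vector."""
    e = np.exp(w - w.max())
    return e / e.sum()

def fuse_per_axis(features, axis_weights):
    """Fuse multi-scale feature maps with learned, per-direction weights.

    features:     list of S arrays, each (D, H, W), assumed already
                  resampled to a common grid (resampling omitted here).
    axis_weights: (3, S) raw weights, one row per anatomical axis
                  (superior-inferior, anterior-posterior, left-right);
                  a softmax per row makes each row a convex combination.
    Returns one fused (D, H, W) map per direction, so each axis of the
    bounding box can draw on its own mixture of scales.
    """
    stacked = np.stack(features)                        # (S, D, H, W)
    fused = []
    for ax in range(3):
        w = softmax(axis_weights[ax])                   # (S,)
        fused.append(np.tensordot(w, stacked, axes=1))  # (D, H, W)
    return fused
```

In a trained network, `axis_weights` would be learnable parameters, so the data itself decides the per-direction scale mixture; with all raw weights equal, each direction simply averages the scales.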
Results: Compared with the previous approach, our latest one reduces the box size uncertainty by one-third to one-half. All mean box size errors are reduced, most notably for T12 (from 11.07 mm to 5.33 mm). The compromise in center accuracy is negligible (<1 mm).
Conclusion: Our latest approach improves the reliability of accurately detecting anisotropic structures across multiple anatomical sites for various applications, which can help avoid major positioning errors in IGRT. In addition, the improved network demonstrates the ability to simultaneously detect multiple objects spanning a wide range of scales.