Purpose: Ultrasound (US) imaging shows great potential for intra-fractional motion management in radiotherapy due to its high temporal resolution. Progress in this area, however, is hindered by the limited lesion contrast of US images, which makes tumor tracking difficult. This study presents a novel hybrid learning-based method that combines fast block matching with deep learning to address the challenges of motion tracking in US imaging.
Methods: In our hybrid learning-based model, an unsupervised fast block matching module first roughly detects potential regions-of-interest (ROIs). A deep learning-based model, a motion region-based convolutional neural network (motion R-CNN), then refines the locations of the detected ROIs and detects the tracking target within them. The motion R-CNN is trained with several supervision mechanisms that optimize the learnable parameters for ROI refinement and for tracking movement within the ROIs. An evaluation was performed on a public dataset (the MICCAI CLUST 2015 dataset), which contains US liver images collected by seven US scanners using eight types of transducers. Twenty 3D US image sequences from 18 subjects were selected to test the performance of the proposed method. Tracking error (TE) was calculated as the distance between the tracked location and the ground truth location of each landmark. The ground truth locations were manually annotated by three observers on 10% of the images; these annotations underwent a quality check and, where necessary, correction to ensure optimal evaluation conditions.
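The two quantitative building blocks named above, exhaustive block matching for coarse ROI detection and the Euclidean tracking-error metric, can be illustrated with a minimal sketch. This is not the authors' implementation: the block size, search range, and sum-of-squared-differences (SSD) similarity measure are illustrative assumptions, and the sketch is 2D for brevity.

```python
import numpy as np

def block_matching(ref, cur, center, block=8, search=12):
    """Exhaustive block matching: find the displacement of a template
    centered at `center` in `ref` within a +/-`search` pixel window of
    `cur`, minimizing the sum of squared differences (SSD).
    Assumes the template and search window lie inside both images."""
    cy, cx = center
    tmpl = ref[cy - block:cy + block, cx - block:cx + block].astype(float)
    best_ssd, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            cand = cur[y - block:y + block, x - block:x + block].astype(float)
            ssd = np.sum((cand - tmpl) ** 2)
            if ssd < best_ssd:
                best_ssd, best_dy, best_dx = ssd, dy, dx
    return best_dy, best_dx

def tracking_error(tracked, truth):
    """TE: Euclidean distance (in mm, given mm coordinates) between the
    tracked landmark location and its ground-truth annotation."""
    return float(np.linalg.norm(np.asarray(tracked) - np.asarray(truth)))
```

In the full method, the ROI returned by this coarse search would be passed to the motion R-CNN for refinement rather than used directly as the tracking result.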
Results: The mean TE of the proposed method was 1.68 ± 0.79 mm for the 3D sequences.
Conclusion: The feasibility and accuracy of the proposed method were demonstrated. This workflow has the potential to enable and facilitate the adoption of US imaging into routine motion management in radiotherapy.