Purpose: To develop a deep-learning-based algorithm that automatically localizes and segments rectal tumors on multi-parametric MRI images. Performance was validated against the gold-standard ground truth: manual segmentation by an expert radiologist.
Methods: A total of 197 patients with locally advanced rectal cancer were included. The imaging protocol comprised T1-weighted (T1w), T2-weighted (T2w), four-phase dynamic contrast-enhanced (DCE), and diffusion-weighted imaging (DWI) with two b-values. A network combining two U-Nets in series was proposed. Image patches at the same position from all six frames were used as the input to the first U-Net. Its results were fed into the second U-Net, which estimated the probability that the central patch belonged to the lesion. The ground truth was outlined by an experienced rectal MRI radiologist. Ten-fold cross-validation was used to evaluate the performance of the classifier.
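The patch-based multi-sequence input described above can be sketched as follows. The patch size, channel ordering, and function names here are illustrative assumptions for a channels-first stacking of co-registered frames, not the authors' actual implementation.

```python
import numpy as np

def extract_multiframe_patch(frames, center, size=32):
    """Cut patches at the same position from each co-registered frame
    and stack them into one multi-channel input (channels-first).
    Patch size of 32 is an illustrative assumption."""
    half = size // 2
    r, c = center
    patch = np.stack(
        [f[r - half:r + half, c - half:c + half] for f in frames]
    )
    return patch  # shape: (num_frames, size, size)

# Six co-registered frames of a 256x256 slice (random data for illustration)
frames = [np.random.rand(256, 256).astype(np.float32) for _ in range(6)]
x = extract_multiframe_patch(frames, center=(128, 128), size=32)
print(x.shape)  # (6, 32, 32)
```

Stacking the sequences as input channels lets the first U-Net learn from all modalities jointly rather than segmenting each sequence separately.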
Results: The validation group was used to optimize hyperparameters such as the learning rate, decay rate, and number of epochs by maximizing the objective function. The Dice Similarity Coefficient (DSC) was used to compare the algorithm's output with the ground truth. Across all 197 patients, the mean DSC reached 0.82 (range 0.21-0.97), a significant improvement over previously reported work using a single modality alone.
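The DSC used for evaluation measures the overlap between the predicted and ground-truth masks as 2|A∩B| / (|A|+|B|). A minimal sketch of that computation (the function name and masks are illustrative, not from the paper):

```python
import numpy as np

def dice_similarity_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: predicted mask (3 voxels) nested inside ground truth (4 voxels)
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True                      # 4 voxels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1] = True                          # 2 voxels
pred[1, 2] = True                            # +1 voxel, all inside truth
print(round(dice_similarity_coefficient(pred, truth), 3))  # → 0.857
```

A DSC of 1.0 indicates perfect overlap and 0 indicates none, so a mean of 0.82 reflects close agreement with the radiologist's contours on most cases.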
Conclusion: Our work shows that deep learning with combined image sequences can serve as a promising tool for fully automatic tumor localization and segmentation in rectal cancer. This efficient and reliable segmentation method may provide a fundamental step toward quantitatively extracting imaging information and further assessing patient risk or benefit from personalized treatment.
Keywords: MRI, Image Processing, Segmentation