Purpose: To propose an unsupervised deep learning-based method for MRI-CT head-and-neck (HN) image registration using image self-similarity descriptors.
Methods: The MRI and CT images were first processed separately to extract self-similarity descriptors for each modality. The descriptors were obtained by constructing normalized local intensity differences, which served as local structural descriptors. Because the MRI and CT self-similarity descriptors represent local image features on the same gray scale, traditional similarity metrics can be used for similarity calculation. An unsupervised network takes the concatenated self-similarity descriptors and the original images as input and directly predicts the deformation vector field (DVF). The network was trained by minimizing a normalized cross-correlation (NCC) loss between the deformed MRI and the fixed CT, plus an additional DVF regularization loss. The network was trained on datasets from 20 patients and evaluated on a separate set of 10 patients.
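To make the descriptor and loss construction concrete, here is a minimal NumPy sketch. The exact descriptor form, neighborhood offsets, and regularizer used in the paper are not specified in the abstract, so the functions below are illustrative assumptions: a self-similarity descriptor built from normalized local squared intensity differences, an NCC loss, and a common gradient-based DVF smoothness penalty.

```python
import numpy as np

def self_similarity_descriptor(img, offsets=((1, 0), (-1, 0), (0, 1), (0, -1)),
                               eps=1e-6):
    """Hypothetical self-similarity descriptor from normalized local
    intensity differences (the paper's exact construction may differ).
    Returns one channel per neighbor offset, mapped to a common gray scale."""
    diffs = []
    for dy, dx in offsets:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        diffs.append((img - shifted) ** 2)
    diffs = np.stack(diffs, axis=0)            # (K, H, W) squared differences
    var = diffs.mean(axis=0) + eps             # local variance estimate
    desc = np.exp(-diffs / var)                # modality-independent response
    return desc / (desc.max(axis=0, keepdims=True) + eps)

def ncc_loss(a, b, eps=1e-6):
    """Negative normalized cross-correlation between two images."""
    a0, b0 = a - a.mean(), b - b.mean()
    denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + eps
    return -float((a0 * b0).sum() / denom)

def dvf_smoothness(dvf):
    """L2 penalty on spatial gradients of a (2, H, W) DVF -- a common
    regularizer; assumed form, not necessarily the paper's choice."""
    dy = np.diff(dvf, axis=1)
    dx = np.diff(dvf, axis=2)
    return float((dy ** 2).mean() + (dx ** 2).mean())
```

In a training loop, the total loss would be `ncc_loss(warped_mri, fixed_ct) + lam * dvf_smoothness(dvf)` for some weight `lam`; because both descriptors live on the same gray scale, the NCC term can also be applied between descriptors rather than raw intensities.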
Results: Image alignment at the shoulder region and the image boundaries is greatly improved after registration. Landmarks at the center of the humeral head and the tip of the axis vertebra dens were manually selected to compute landmark distance errors. The average landmark distance error was reduced from 6.13 mm to 2.92 mm at the center of the humeral head, and from 3.98 mm to 2.62 mm at the tip of the axis dens.
Conclusion: We have developed a novel unsupervised multimodal method for MRI-CT HN image registration. Taking the combined MRI and CT images and their respective self-similarity descriptors as input, the network directly predicts a DVF that deformably aligns the MRI HN images to the CT images, facilitating HN cancer treatment planning.