
Session: Multi-Disciplinary General ePoster Viewing

MR-CT Image Registration by Using Unsupervised Deep Learning Model

M Tavakoli1*, S Abbasi2, H R Boveiri3, M Mosleh-Shirazi4, A Mehdizadeh5, (1) University of Pittsburgh School of Medicine and UPMC Hillman Cancer Center, Pittsburgh, PA, USA, (2) Shiraz University of Medical Sciences, Shiraz, IR, (3) Sama College, IAU, Shoushtar Branch, (4) Shiraz University of Medical Sciences, Shiraz, IR, (5) Shiraz University of Medical Sciences,


PO-GePV-M-181 (Sunday, 7/10/2022)   [Eastern Time (GMT-4)]

ePoster Forums

Purpose: Image registration is an important task for many clinical image-guided interventions. However, it is challenging because of the elaborate and unknown relationships between different imaging modalities. Currently, supervised deep learning is a well-known approach in which registration is conducted in an end-to-end, one-shot manner. However, implementing it requires a large amount of ground-truth data to guarantee the accuracy of deep neural networks for registration. Moreover, supervised methods may yield models that are biased toward the annotated structures. A possible alternative in this case is an unsupervised approach.

Methods: In this study, we designed a novel unsupervised Convolutional Neural Network (CNN)-based model for affine registration of computed tomography/magnetic resonance (CT/MR) brain images. The proposed model comprises five consecutive modules: a concatenator, a CNN serving as the localization network, a combination of grid generator and resampler, a customized loss function to drive the training process, and finally an Adam optimizer to perform stochastic gradient descent with the back-propagation algorithm. We created a dataset consisting of 1000 pairs of CT/MR slices from the brains of 100 neuropsychiatric patients. Next, 12 landmarks were chosen by an experienced radiologist and annotated on each slice, enabling the computation of both target registration error (TRE) and Dice similarity.
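The five modules described above follow the standard spatial-transformer pattern for unsupervised affine registration. The sketch below is illustrative only, not the authors' code: all class and function names are hypothetical, the network architecture and image size are assumptions, and a simple MSE intensity loss stands in for whatever customized multimodal loss the abstract refers to.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineLocalizationNet(nn.Module):
    """CNN mapping a concatenated (fixed, moving) pair to 6 affine parameters."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, 6)
        # Initialize to the identity transform so training starts stable.
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, fixed, moving):
        x = torch.cat([fixed, moving], dim=1)        # concatenator module
        theta = self.fc(self.features(x).flatten(1))  # localization network
        return theta.view(-1, 2, 3)

def warp(moving, theta):
    """Grid generator + resampler (spatial transformer)."""
    grid = F.affine_grid(theta, moving.size(), align_corners=False)
    return F.grid_sample(moving, grid, align_corners=False)

def similarity_loss(fixed, warped):
    """Stand-in intensity loss; a multimodal criterion would be used in practice."""
    return F.mse_loss(warped, fixed)

# One unsupervised training step with Adam and back-propagation:
net = AffineLocalizationNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
fixed = torch.rand(1, 1, 64, 64)    # e.g., an MR slice
moving = torch.rand(1, 1, 64, 64)   # e.g., the paired CT slice
theta = net(fixed, moving)
loss = similarity_loss(fixed, warp(moving, theta))
opt.zero_grad(); loss.backward(); opt.step()
```

Because no landmark annotations enter the loss, training is fully unsupervised; the 12 landmarks mentioned above serve only for evaluation (TRE and Dice) after training.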

Results: The proposed method registered the multimodal images with a Dice similarity of 0.8218 and a TRE of 9.6231. Moreover, the approach registered the images in an acceptable time of 203 ms, making it appreciable for clinical usage owing to the short registration time and high accuracy.

Conclusion: The results illustrate that our proposed method achieved competitive performance against other approaches at a reasonable computation time.


CT, Registration, MRI

