Purpose: Therapy planning can be improved by localizing tumors and organs at risk more precisely than is currently achievable. One way to increase image quality is multimodal image fusion based on aligned images produced by artificial neural networks.
Methods: CT and MRI scans of patients with tumors in the head and neck region are used for image registration and fusion. An automatic preprocessing step first ensures a rough rigid alignment and identical image formats. A convolutional neural network then applies several filter layers to extract corresponding features from the input image pairs and outputs a displacement field. This procedure is iterated many times; the model with the lowest value of the loss function is taken as the best registration model. To assess the registered images, segmented images are used, and the Dice similarity coefficient (DSC) with respect to the fixed images is computed. Afterwards, a discrete wavelet transform (DWT) decomposes each image into several frequency components while preserving spatial information. The components of two images from different modalities are merged by taking the mean, minimum, or maximum of the voxel values. Applying the inverse DWT yields the fused image.
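The evaluation and fusion steps above can be sketched in a minimal 2-D form. The sketch below uses NumPy and the PyWavelets package; the function names, the single-level decomposition, and the `haar` wavelet are illustrative assumptions for this example, not details of the study's actual implementation:

```python
import numpy as np
import pywt  # PyWavelets, assumed here for the DWT/inverse-DWT steps


def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())


def fuse_dwt(img_ct, img_mri, rule="mean", wavelet="haar"):
    """Fuse two aligned 2-D images: decompose each with a single-level DWT,
    merge the frequency sub-bands voxel-wise, then invert the transform."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img_ct, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img_mri, wavelet)
    # Merge rule applied per sub-band: mean, minimum, or maximum of coefficients.
    merge = {"mean": lambda x, y: (x + y) / 2.0,
             "min": np.minimum,
             "max": np.maximum}[rule]
    fused = (merge(cA1, cA2),
             (merge(cH1, cH2), merge(cV1, cV2), merge(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```

A 3-D extension for CT/MRI volumes would replace `dwt2`/`idwt2` with `pywt.dwtn`/`pywt.idwtn`; the merge rules carry over unchanged.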
Results: The precision of a registration model increases with the number of input images. Images unknown to the trained model can also be registered, with only a slight decrease in the DSC. The fused CT-MRI scans have the advantage of combining clearly visible bone structures, soft tissue, and tumors in a single image.
Conclusion: This work has great potential to support therapy planning and aims at clinical use. Once image registration and fusion are fully investigated and more images are included, the results will be evaluated for application in radiotherapy.
Funding Support, Disclosures, and Conflict of Interest: The presented study is supported by the MERCUR foundation (grant number St-2019-0007).