Session: Imaging: AI in Imaging

Super-Resolution CT Imaging via a Convolutional Neural Network with an Observer Loss Function

M Yu*, M Han, J Baek, Yonsei University, Incheon, KR

Presentations

SU-IePD-TRACK 1-7 (Sunday, 7/25/2021) 5:30 PM - 6:00 PM [Eastern Time (GMT-4)]

Purpose: To enhance CT image resolution, we propose a convolutional neural network (CNN)-based super-resolution (SR) technique. To train the SR network, we propose a perceptual loss function that uses a classifier trained on CT images as a feature extractor (i.e., an observer loss).
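The observer loss described above is a perceptual loss: instead of comparing the SR output and the HR target pixel by pixel, it compares their feature maps from a classifier trained on CT images. A minimal numpy sketch, where `feature_extractor` is a hypothetical callable standing in for the trained classifier's intermediate layer:

```python
import numpy as np

def observer_loss(sr_img, hr_img, feature_extractor):
    """Perceptual loss in the feature space of a trained classifier.

    feature_extractor is a hypothetical callable returning intermediate
    feature maps (in the abstract, the CT classifier's 10th conv layer).
    """
    f_sr = feature_extractor(sr_img)
    f_hr = feature_extractor(hr_img)
    # Mean-squared error between feature maps, not between raw pixels.
    return float(np.mean((f_sr - f_hr) ** 2))

# Toy check with an identity "extractor": the loss reduces to plain MSE.
a = np.zeros((4, 4))
b = np.ones((4, 4))
print(observer_loss(a, b, lambda x: x))  # 1.0
```

With a real feature extractor, regions that matter to the classifier (lesion-like structures, textures) dominate the loss, which is the intuition behind training against an "observer" rather than pixel intensities.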

Methods: We used patient data authorized by the Mayo Clinic for the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge, comprising normal-dose CT images of 10 patients. We acquired projection data using Siddon's algorithm in a fan-beam CT geometry and applied 1x2 detector binning. High-resolution (HR) and low-resolution (LR) images were generated by back-projecting the original and binned sinograms, respectively. We implemented a 13-layer CNN as the CT classifier and trained it to distinguish lesion-present from lesion-absent patches: we randomly extracted 64x64 patches from the HR images and added Gaussian signals of various intensities, sizes, and locations. After training, we extracted features from the 10th convolutional layer to compute the perceptual loss. For the SR network, we adopted a RED-CNN-based architecture with bicubic interpolation added at the beginning. We trained the SR network to minimize one of three loss functions: mean-squared error (MSE), ImageNet-pretrained VGG loss, and the proposed observer loss. We compared the results across loss functions using peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual information fidelity (VIF).
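Two of the data-preparation steps above can be sketched concretely: 1x2 detector binning of the sinogram (which produces the LR data) and insertion of a Gaussian signal into a 64x64 patch (which produces lesion-present classifier training data). A minimal numpy sketch; the function names, array sizes, and signal parameters are illustrative, not the authors' actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_detector_1x2(sinogram):
    """1x2 detector binning: average adjacent channel pairs, halving the
    detector sampling. Back-projecting the binned sinogram gives the LR
    image. Assumes an even number of detector channels."""
    views, channels = sinogram.shape
    return sinogram.reshape(views, channels // 2, 2).mean(axis=2)

def add_gaussian_signal(patch, amplitude, sigma, center):
    """Insert an isotropic Gaussian 'lesion' of the given intensity,
    size, and location into a patch, making a lesion-present sample."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = center
    signal = amplitude * np.exp(
        -((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2)
    )
    return patch + signal

sino = rng.random((360, 128))      # illustrative: 360 views, 128 channels
lr_sino = bin_detector_1x2(sino)   # halved detector sampling: (360, 64)
patch = np.zeros((64, 64))
lesion = add_gaussian_signal(patch, amplitude=0.1, sigma=4.0, center=(32, 32))
print(lr_sino.shape, lesion[32, 32])
```

Randomizing `amplitude`, `sigma`, and `center` over many patches yields the varied lesion-present training set the classifier needs.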

Results: The quantitative metrics are best with the MSE loss, but the resulting images are blurred. In contrast, training the SR network with a perceptual loss preserves structural details; in particular, the SR network with the observer loss recovers more detailed textures than the network with the VGG loss.

Conclusion: In this work, we proposed an observer loss for training an SR network for CT images. The SR network trained with the proposed observer loss preserved structural details and textures better than networks trained with the other loss functions.


    Keywords

    CT, Texture Analysis, Resolution

    Taxonomy

    IM- CT: Machine learning, computer vision
