
Predicting Later Time Frame Images in Dynamic PET Using Deep Learning

M Mokri1*, M Safari2, J Dasilva3, S Kaviani4, L Archambault5, J Carrier6, (1) Faculty of Medicine, University of Montreal, Montreal, QC, CA, (2) Departement de physique, de genie physique et d'optique, Laval University, Quebec, QC, CA, (3) Departement de radiologie, radio-oncologie et medecine nucleaire, University of Montreal, Montreal, QC, CA, (4) Faculty of Medicine, University of Montreal, Montreal, QC, CA, (5) Departement de physique, de genie physique et d'optique, Laval University, Quebec, QC, CA, (6) Departement de physique, University of Montreal, Montreal, QC, CA

Presentations

PO-GePV-I-76 (Sunday, 7/10/2022)   [Eastern Time (GMT-4)]

ePoster Forums

Purpose: Dynamic PET/CT is a sensitive imaging modality that maps the spatial and temporal pattern of radiopharmaceutical biodistribution and allows quantitative analysis. Despite the high potential of dynamic PET/CT imaging for pharmacokinetic analysis, its lengthy scanning time hinders clinical application. This study aimed to apply a 2D U-Net to synthesize the second half of the dynamic PET time-frame images using only the first half.
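The abstract does not state how the early and late frames are arranged for the network. One plausible framing, sketched below in PyTorch, stacks the first-half frames along the channel axis as the input and uses the second-half frames as the prediction target; the frame count here is hypothetical, since the abstract only reports a 26-minute acquisition.

import torch

# Hypothetical framing: the abstract does not report how many time frames the
# 26-minute acquisition was binned into, so n_frames is illustrative only.
n_frames = 24                      # assumed total number of dynamic frames
half = n_frames // 2               # first half = network input, second half = target
H = W = 212                        # in-plane image size reported in the abstract

dynamic_pet = torch.rand(n_frames, H, W)   # one dynamic study, (frames, H, W)

x = dynamic_pet[:half].unsqueeze(0)   # early frames as input channels, (1, 12, 212, 212)
y = dynamic_pet[half:].unsqueeze(0)   # late frames as prediction target, (1, 12, 212, 212)
print(x.shape, y.shape)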

Methods: In this study, nine dynamic [18F]-FDG PET/CT mouse images were acquired, of which seven were assigned for training. The acquisition lasted 26 minutes, and the image size was 212 × 212. We used a 2D U-Net architecture with skip connections. The encoder uses convolution layers and the decoder uses transposed convolution layers, both with kernel size 4, stride 2, and padding 1. To stabilize training, each layer is followed by batch normalization and a ReLU activation function. The model was trained with the SGD optimizer (learning rate 2e-4) for 500 epochs using an L1 loss function.
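As a concrete illustration of the reported settings (kernel size 4, stride 2, padding 1; batch normalization followed by ReLU; skip connections; SGD with learning rate 2e-4; L1 loss; 500 epochs), the following is a minimal PyTorch sketch. The network depth, channel widths, and single-skip layout are assumptions chosen so that 212 × 212 frames downsample and upsample cleanly; they are not taken from the abstract, and data loading and normalization are omitted.

import torch
import torch.nn as nn

class Down(nn.Module):
    # Encoder step: stride-2 convolution (kernel 4, padding 1) + BatchNorm + ReLU.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class Up(nn.Module):
    # Decoder step: stride-2 transposed convolution (kernel 4, padding 1) + BatchNorm + ReLU.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class UNet2D(nn.Module):
    # Two-level U-Net with one skip connection; depth and widths are assumed.
    def __init__(self, in_ch, out_ch, base=32):
        super().__init__()
        self.enc1 = Down(in_ch, base)        # 212 -> 106
        self.enc2 = Down(base, base * 2)     # 106 -> 53
        self.dec2 = Up(base * 2, base)       # 53  -> 106
        self.dec1 = Up(base * 2, base)       # 106 -> 212 (input = skip + upsampled)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)
    def forward(self, x):
        s1 = self.enc1(x)
        b = self.enc2(s1)
        u = self.dec2(b)
        u = self.dec1(torch.cat([u, s1], dim=1))   # skip connection
        return self.head(u)

# Dummy tensors standing in for the first- and second-half frames
# (see the framing sketch above); 12 frames per half is illustrative.
x = torch.rand(1, 12, 212, 212)
y = torch.rand(1, 12, 212, 212)

model = UNet2D(in_ch=12, out_ch=12)
optimizer = torch.optim.SGD(model.parameters(), lr=2e-4)   # reported learning rate
criterion = nn.L1Loss()                                    # reported L1 loss

for epoch in range(500):                                   # reported number of epochs
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()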

Results: After 500 epochs, the loss declined from 0.55 to 0.18 on the training dataset and from 0.27 to 0.15 on the testing dataset. The loss curves show that, despite the continuous reduction of the training loss, the testing loss remained constant after epoch 200, which might be attributed to the small cohort population. Nevertheless, our network was robust to overfitting.

Conclusion: This study shows the potential of deep learning to synthesize all frames of a dynamic PET study from only the first half of the time points. As a result, this method reduces imaging time and improves throughput. However, its clinical application requires more data; we therefore plan to apply this technique to additional animal and human data in the future.

Keywords

Not Applicable / None Entered.

Taxonomy

Not Applicable / None Entered.
