Session: Deep Learning for Image-guided Therapy

Realistic Respiratory Motion Simulation Using Deep Learning

D Lee*, S Nadeem, Y Hu, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY

Presentations

WE-C1030-IePD-F2-3 (Wednesday, 7/13/2022) 10:30 AM - 11:00 AM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 2

Purpose: Simulating realistic 4D respiratory motion is challenging given the complex anatomy and large variability in respiratory patterns. Commercial simulation software can generate respiratory motion but is constrained by a limited set of transformation models and therefore does not produce realistic motion. We proposed a novel deep learning framework for patient-specific, realistic respiratory motion simulation that predicts multiple deformation vector fields (DVFs) at different respiratory phases from an initial-phase CT image.

Methods: A total of 140 patients from an internal 4D-CT dataset (10 phases) were retrospectively studied; 100 were used for training and 40 for testing. Our deep learning model consists of two modules: Seq2Seq and VoxelMorph. The Seq2Seq module was built from stacked convolutional long short-term memory (ConvLSTM) layers. It takes an inhale-phase CT image and the corresponding 1D breathing trace and predicts images for the later 9 phases. We used diaphragm displacement across the phases as the 1D respiration surrogate. The VoxelMorph module then takes the initial-phase image and the predicted phase images and generates a DVF for each phase, representing the respiratory motion. For evaluation, we warped the initial phase to the other phases with the predicted DVFs and compared the results to the ground truth, using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) as metrics.
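The final evaluation step above, warping the initial-phase image with a predicted DVF, can be sketched as follows. This is an illustrative backward-warping implementation using SciPy, not the authors' code; the function name `warp_with_dvf` and the voxel-displacement convention are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_dvf(image, dvf):
    """Warp a 3D image by a deformation vector field (DVF).

    image: (D, H, W) volume
    dvf:   (3, D, H, W) displacement in voxels, defined at each target
           voxel (backward mapping: each output voxel samples the input
           at its own position plus the displacement).
    """
    grid = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, dvf)]
    # trilinear interpolation at the displaced coordinates
    return map_coordinates(image, coords, order=1, mode="nearest")

# toy example: a DVF that pulls intensities from 2 voxels deeper along axis 0
vol = np.zeros((8, 8, 8), dtype=np.float32)
vol[4, 4, 4] = 1.0
dvf = np.zeros((3, 8, 8, 8), dtype=np.float32)
dvf[0] = 2.0
warped = warp_with_dvf(vol, dvf)  # the bright voxel moves from (4,4,4) to (2,4,4)
```

In a 4D-CT setting the same operation would be applied per phase, with one DVF per predicted respiratory phase.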

Results: For the 3 exhale phases containing the largest deformation from the initial inhale-phase image, the average PSNR across the 40 test cases improved from 28.12±3.91 to 30.69±3.29, and the average SSIM improved from 0.83±0.09 to 0.90±0.04.
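The two evaluation metrics reported above are standard image-similarity measures. A minimal sketch of computing them with scikit-image is shown below; the synthetic ground-truth/prediction pair here is purely illustrative and unrelated to the study's data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# synthetic stand-ins for a ground-truth phase image and a warped prediction
rng = np.random.default_rng(0)
gt = rng.random((64, 64)).astype(np.float32)
pred = np.clip(gt + rng.normal(0.0, 0.05, gt.shape), 0.0, 1.0).astype(np.float32)

# data_range must match the intensity scale of the images (here [0, 1])
psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, data_range=1.0)
```

Higher is better for both: PSNR is unbounded (in dB), while SSIM lies in [-1, 1] with 1 meaning identical images.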

Conclusion: We proposed a novel deep learning framework for simulating realistic patient-specific respiratory motion. The framework predicts DVFs across the breathing phases from a single-phase CT image and a patient-specific breathing trace. The predicted DVFs can serve as ground truth for validating deformable image registration and are suitable for online tumor tracking.

Funding Support, Disclosures, and Conflict of Interest: This project was supported by MSK Cancer Center Support Grant/Core Grant (P30 CA008748).

Keywords

Respiration

Taxonomy

IM/TH- Image Analysis (Single Modality or Multi-Modality): Machine learning
