One-Second Into the Future: A Deep Learning Method to Predict 3D Lung Cancer Target Motion to Account for Adaptation Latency

Q Hoang1, J Booth2, V Caillet2,3, P Keall3, D Nguyen2,3,4*, (1) School of Biomedical Engineering, University of Sydney, Camperdown, NSW, Australia, (2) Royal North Shore Hospital, Sydney, NSW, Australia, (3) ACRF Image X Institute, University of Sydney, Camperdown, NSW, Australia, (4) University of Technology Sydney, Ultimo, NSW, Australia

Presentations

SU-F-TRACK 6-4 (Sunday, 7/25/2021) 4:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Purpose: To develop an intrafraction 3D motion prediction algorithm that compensates for latencies of up to one second between motion observation and adaptation in real-time IGRT systems. AAPM TG-264 recommended an end-to-end latency of less than 0.5 s for MLC tracking during real-time adaptation to intrafraction motion. This is difficult to achieve when current real-time imaging systems (MRI-linac, Varian and Elekta intrafraction fluoroscopy) have 200-500 ms between successive images, hence the need for motion prediction.

Methods: A motion prediction system based on a long short-term memory (LSTM) network topology for signal forecasting was developed. A separate network was trained for each motion axis: left-right (LR), superior-inferior (SI) and anterior-posterior (AP). Intrafraction motion acquired with Calypso from 7 lung cancer patients was used. Each patient received either 4 or 5 fractions, resulting in 29 fractions and 15 hours of recordings in total. The network was trained with two configurations: 2 minutes and 4 minutes of motion per patient. The LSTM weights were updated each time a new target position was observed. The prediction horizons evaluated were 0.5 s and 1 s. The network's predictions were compared against the actual motion.
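
As an illustration of the approach described above, the following is a minimal sketch of a per-axis LSTM forecaster with online weight updates, written in PyTorch. The hidden size, window length, optimizer, learning rate and loss are hypothetical choices for illustration, not the authors' configuration.

import torch
import torch.nn as nn

class AxisLSTM(nn.Module):
    # One forecaster per motion axis (LR, SI, AP), as described in Methods.
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, window, 1) past positions of one axis, in mm
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predicted position at t + horizon

model = AxisLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def online_step(window, observed):
    # Update the weights each time a new target position is observed,
    # pairing the input window (lagged by the horizon) with the new sample.
    prediction = model(window)
    loss = loss_fn(prediction, observed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return prediction.detach()

# Example call: 25 past samples of one axis. At Calypso's ~25 Hz sampling
# (assumed here, not stated in the abstract), a 1 s horizon is ~25 samples ahead.
window = torch.randn(1, 25, 1)
observed = torch.randn(1, 1)
online_step(window, observed)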

Results: With 2 minutes of training data, the prediction error (mean±SD) was 0.04±1.13mm, 0.02±1.18mm and -0.05±1.28mm for 0.5 s prediction, and -0.05±1.12mm, 0.04±1.27mm and -0.05±1.30mm for 1 s prediction in LR, SI and AP, respectively. The error decreased when the training data was increased to 4 minutes, for both the 0.5 s and 1 s horizons. The root-mean-square error (RMSE) was <1mm for all motion axes. The range of motion in the tested patient database was [-8.3, 11.3]mm, [-34.8, 7.7]mm and [-16.1, 9.3]mm in LR, SI and AP, respectively.
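
To make the reported metrics concrete, below is a small sketch (numpy assumed) of how the per-axis statistics could be computed from predicted and actual position traces. The sign convention of the error (predicted minus actual) is an assumption, not stated in the abstract.

import numpy as np

def error_stats(predicted, actual):
    # Signed per-sample error in mm (sign convention assumed).
    err = np.asarray(predicted) - np.asarray(actual)
    mean, sd = err.mean(), err.std()    # reported as mean±SD, e.g. 0.04±1.13mm
    rmse = np.sqrt(np.mean(err ** 2))   # reported RMSE, <1mm for all axes
    return mean, sd, rmse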

Conclusion: A deep learning prediction algorithm was developed and tested on 3D lung tumour motion. Forecasting up to 1 second ahead was achieved with sub-millimetre accuracy, enabling real-time compensation of the latency in motion adaptation systems.

Funding Support, Disclosures, and Conflict of Interest: D T Nguyen is funded by a Cancer Institute NSW ECR Fellowship and an NHMRC ECR Fellowship. P Keall is funded by an NHMRC Investigator Grant (L3).

Keywords

3D, MLC, DMLC

Taxonomy

TH- External Beam- Photons: Motion management - intrafraction
