Session: John R. Cameron Early-Career Investigator Symposium

Latent Space Arc Therapy Optimization

N Bice1*, N Kirby1, D Nguyen2, C Kabat1, P Myers1, N Papanikolaou1, M Fakhreddine1, (1) UT Health San Antonio MD Anderson Cancer Center, San Antonio, TX, (2) UT Southwestern Medical Center, Dallas, TX.

Presentations

MO-EF-TRACK 4-7 (Monday, 7/26/2021) 3:30 PM - 5:30 PM [Eastern Time (GMT-4)]

Purpose: Volumetric modulated arc therapy (VMAT) planning is a challenging problem in high-dimensional, non-convex optimization. Current arc therapy optimization algorithms favor solutions near the initialization point and are slower than necessary due to plan overparameterization. In this work, we reduce the effective dimension of arc therapy plans with unsupervised deep learning and navigate the learned low-dimensional coordinates during VMAT optimization.

Methods: We collected a dataset of 1,874 clinically delivered VMAT arcs from historical treatments documented in Elekta Versa HD log files. Multileaf collimator (MLC) leaf positions are represented at 80 evenly spaced control points over the duration of each arc, for 80 leaves on each of 2 banks (a 12,800-dimensional parameterization). A family of 2D convolutional variational autoencoders (VAEs) is used to reduce the dimension of the arc data (embedding dimensions d ∈ {32, 64, 128}). For comparison with a linear machine-learning dimensionality-reduction method, principal component analysis (PCA) is used. The trained models are then used to reconstruct held-out arcs by searching the latent space with simulated annealing against a leaf-position-based objective function. As a traditional baseline, stochastic global direct aperture optimization (DAO) with geometry-based segment initialization is implemented.
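The sketch below illustrates the two components described above: a 2D convolutional VAE that compresses an arc's MLC leaf-position map (2 banks x 80 leaves x 80 control points) into a d-dimensional latent vector, and a simulated-annealing search over that latent vector against a leaf-position objective. This is a minimal illustration, not the authors' code; the framework (PyTorch), layer sizes, cooling schedule, and hyperparameters are assumptions.

import math
import torch
import torch.nn as nn

D_LATENT = 64  # embedding dimension d; the abstract studies d in {32, 64, 128}

class ArcVAE(nn.Module):
    """Encode/decode arcs shaped (batch, 2 banks, 80 leaves, 80 control points)."""
    def __init__(self, d=D_LATENT):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),   # 80 -> 40
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 40 -> 20
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 20 -> 10
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 10 * 10, d)
        self.fc_logvar = nn.Linear(64 * 10 * 10, d)
        self.fc_dec = nn.Linear(d, 64 * 10 * 10)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 10, 10)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 10 -> 20
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 20 -> 40
            nn.ConvTranspose2d(16, 2, 4, stride=2, padding=1),              # 40 -> 80
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def decode(self, z):
        return self.decoder(self.fc_dec(z))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decode(z), mu, logvar

def anneal_latent(vae, target_arc, steps=2000, t0=1.0, step_size=0.1):
    """Simulated annealing over latent coordinates: perturb z, decode to leaf
    positions, and score with a leaf-position objective (here, mean absolute
    error against a held-out target arc)."""
    cost = lambda z: (vae.decode(z) - target_arc).abs().mean().item()
    z = torch.zeros(1, D_LATENT)
    current = cost(z)
    best_z, best = z.clone(), current
    for i in range(steps):
        temperature = max(t0 * (1.0 - i / steps), 1e-6)  # linear cooling (assumption)
        candidate = z + step_size * torch.randn_like(z)  # random latent perturbation
        c = cost(candidate)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if c < current or torch.rand(1).item() < math.exp(-(c - current) / temperature):
            z, current = candidate, c
            if current < best:
                best_z, best = z.clone(), current
    return best_z, best

if __name__ == "__main__":
    vae = ArcVAE().eval()  # in practice, load weights trained on the 1,874-arc dataset
    with torch.no_grad():
        held_out_arc = torch.rand(1, 2, 80, 80) * 200.0  # placeholder leaf positions (mm)
        z_opt, score = anneal_latent(vae, held_out_arc, steps=200)
        print(f"best leaf-position objective: {score:.2f} mm")

Because the annealing moves are taken in d latent coordinates rather than over the 12,800 raw leaf positions, each proposal is a structurally plausible arc, which is the mechanism behind the faster convergence reported below.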

Results: The VAEs compress arcs into 32, 64, and 128 dimensions and reconstruct them with median absolute leaf position errors of 1.59, 1.43, and 1.58 mm, respectively. PCA with the same embedding dimensions yields median errors of 7.99, 6.83, and 5.55 mm. When allowed the same maximum number of iterations, the latent space algorithms converge in significantly fewer iterations than the traditional method. Given the same optimization problem, latent space optimization approaches a solution significantly faster (2.5 minutes) than conventional DAO (7.5+ minutes).

Conclusion: Learned latent spaces provide a setting for efficient VMAT optimization. While global DAO can reach lower objective scores given enough time, optimization in learned latent spaces can achieve similar accuracy much faster.

Keywords

Computer Vision, Image Processing, Intensity Modulation

Taxonomy

TH- External Beam- Photons: IMRT/VMAT dose optimization algorithms
