Purpose: The adoption of AI and Machine Learning (ML) techniques is greatly improving planning efficiency. Training these algorithms can involve numerous iterations in which dose calculation becomes a bottleneck, and reliance on commercial treatment planning systems introduces unnecessary overhead due to the constant transfer of data. Here we present and investigate the effect of kernel truncation and sparsity on dose and fluence calculation for the purpose of designing a dose engine for AI/ML algorithms.
Methods: The algorithm relies on beamlet and voxel discrimination, in which numerous low-contribution beamlets and voxels are pruned. This yields a constant-factor reduction in both computation time and storage, and the resulting dose deposition matrix is highly sparse and can be stored and manipulated as such. We compared our dose calculation to the AAA algorithm in Eclipse, the standard clinical dose engine, in both a water phantom and patient anatomy. Our fluence optimization algorithm was tested on the TG-119 prostate phantom. The system was developed in the Julia programming language.
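The pruning and sparse-matrix approach described above can be sketched as follows. This is a minimal illustration in Python with NumPy/SciPy rather than the authors' Julia implementation; the matrix sizes, the relative truncation cutoff, and the random kernel values are all hypothetical stand-ins:

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Hypothetical dense dose deposition matrix (voxels x beamlets):
# entry (i, j) is the dose beamlet j deposits in voxel i.
n_voxels, n_beamlets = 5000, 200
A_dense = rng.exponential(scale=1.0, size=(n_voxels, n_beamlets))

# Kernel truncation: zero out entries below a cutoff fraction of each
# beamlet's maximum, pruning distant low-dose voxels from that beamlet.
cutoff = 0.05
A_trunc = np.where(A_dense >= cutoff * A_dense.max(axis=0), A_dense, 0.0)

# Store the pruned matrix in compressed sparse row (CSR) form.
A = csr_matrix(A_trunc)

# Dose is then a sparse matrix-vector product with the fluence weights.
w = rng.random(n_beamlets)
dose = A @ w

sparsity = 1.0 - A.nnz / (n_voxels * n_beamlets)
print(f"fraction of entries pruned: {sparsity:.2f}")
```

Once the deposition matrix is sparse, each fluence-optimization iteration reduces to a cheap sparse matrix-vector product, which is what makes repeated dose evaluation inside an AI/ML training loop tractable.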
Results: The dose calculation algorithm had errors of less than 1% for field sizes up to 30×30 cm². Comparing dose-volume objectives, the optimizer produced a distribution that differed from the constraints on average by 0.1% for the PTV, 2.9% for the rectum, and 2.85% for the bladder. For beamlet resolutions common in multi-beam treatments, the calculation ran approximately 17 times faster than a full dense calculation on the same hardware.
Conclusion: Sparsity and kernel truncation greatly improved computational efficiency while maintaining the accuracy needed for AI/ML applications. This improvement will greatly aid the development cycle for AI/ML applications in radiation therapy.