
Session: Multi-Disciplinary General ePoster Viewing

Evaluating the Use of Software Parallelism for DVH Calculations

W Sleeman, P Turner, P Bose, S Srinivasan, P Ghosh, J Palta, R Kapoor*, Virginia Commonwealth University, Richmond, VA

Presentations

PO-GePV-M-49 (Sunday, 7/10/2022)   [Eastern Time (GMT-4)]

ePoster Forums

Purpose: To evaluate the potential benefit of software parallelism for calculating dose-volume histograms (DVHs).

Methods: A DVH calculation engine was modified to add support for both CPU multi-threading and GPU-based acceleration. Dose and structure set DICOM-RT files from 100 patients were selected, resulting in an average of 19 structures per patient. DVHs were calculated for each structure using traditional serial processing, OpenMP multi-threading, GPU processing with CUDA, and a combination of OpenMP and CUDA. Experiments were performed using an Intel i7-7700K CPU and an NVIDIA 2080 Ti GPU.
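The following is a minimal sketch of how a cumulative DVH loop can be parallelized with OpenMP, of the kind the modified engine could use. It assumes the structure has already been rasterized into a list of per-voxel dose values; the function name, bin width, and toy data are illustrative and not taken from the engine described here.

#include <omp.h>
#include <vector>
#include <cstdio>

// Build a cumulative DVH: bin i holds the fraction of voxels receiving
// at least i * binWidthGy of dose.
std::vector<double> cumulativeDvh(const std::vector<float>& voxelDoseGy,
                                  double binWidthGy, int numBins)
{
    std::vector<long long> counts(numBins, 0);

    // Differential histogram, accumulated in parallel using per-thread
    // partial histograms that are merged after the loop.
    #pragma omp parallel
    {
        std::vector<long long> local(numBins, 0);
        #pragma omp for nowait
        for (long long i = 0; i < (long long)voxelDoseGy.size(); ++i) {
            int bin = (int)(voxelDoseGy[i] / binWidthGy);
            if (bin >= numBins) bin = numBins - 1;
            if (bin < 0) bin = 0;
            ++local[bin];
        }
        #pragma omp critical
        for (int b = 0; b < numBins; ++b) counts[b] += local[b];
    }

    // Convert to a cumulative histogram, normalized to volume fraction.
    std::vector<double> dvh(numBins, 0.0);
    long long total = (long long)voxelDoseGy.size();
    long long running = 0;
    for (int b = numBins - 1; b >= 0; --b) {
        running += counts[b];
        dvh[b] = total > 0 ? (double)running / (double)total : 0.0;
    }
    return dvh;
}

int main()
{
    std::vector<float> doses = {1.2f, 3.4f, 2.2f, 0.5f, 4.9f, 3.3f};  // toy data
    std::vector<double> dvh = cumulativeDvh(doses, 0.5, 12);
    for (size_t b = 0; b < dvh.size(); ++b)
        std::printf("D >= %.1f Gy : %.2f\n", b * 0.5, dvh[b]);
    return 0;
}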

Results: The average DVH running time per patient for each level of parallelism was 3.022 s for serial, 1.663 s for OpenMP, 3.141 s for CUDA, and 1.675 s for OpenMP plus CUDA. File reading and preparation of the dose and structure set files took a similar amount of time across experiments, ranging from 0.75 to 0.81 seconds.

Conclusion: This work demonstrated that DVH calculations can benefit from software parallelism, primarily from CPU multi-threading. GPU acceleration increased performance for some individual structures but did not always outweigh the additional overhead. While the ratio of processed voxels to run time was constant for most structures, these ratios worsened for non-anatomical structures such as fiducials (small) and CouchSurface (large). Since the DVH values for these structures have no clinical significance, they should be automatically excluded from such calculations to speed up real-time or batch processing. Over 30% of the time in the faster threading-based experiments was also spent loading and preparing the data. To fully maximize parallel computing performance for DICOM or DICOM-RT data processing, the overhead of reading and preparing these files needs to be addressed. Future solutions may include the use of distributed file systems such as the Hadoop distributed file system (HDFS) or index files that could supplement the DICOM standard.
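A minimal sketch of the kind of structure filtering suggested above follows, keyed on ROI name substrings. The Structure type and the skip list are illustrative assumptions; a production filter would more likely use the DICOM RT ROI Interpreted Type (e.g. MARKER, SUPPORT) than free-text names.

#include <string>
#include <vector>
#include <algorithm>
#include <cctype>

struct Structure {
    std::string name;
    // contour / voxel data would live here
};

static std::string toLower(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return (char)std::tolower(c); });
    return s;
}

// Returns true for structures whose DVH has no clinical meaning,
// such as fiducial markers or the treatment couch.
bool isNonAnatomical(const Structure& s)
{
    static const std::vector<std::string> skipList = {
        "fiducial", "couch", "marker"
    };
    const std::string lower = toLower(s.name);
    for (const auto& token : skipList)
        if (lower.find(token) != std::string::npos)
            return true;
    return false;
}

// Keep only structures worth calculating a DVH for.
std::vector<Structure> filterForDvh(const std::vector<Structure>& all)
{
    std::vector<Structure> kept;
    for (const auto& s : all)
        if (!isNonAnatomical(s))
            kept.push_back(s);
    return kept;
}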

Keywords

Dose Volume Histograms, Parallel Computing

Taxonomy

Not Applicable / None Entered.
