
A Deep Learning Fiducial Marker Detection Algorithm for Beam's-Eye-View Marker Tracking for Liver Cancer Patients

D Chrystall1*, E Hewson2, C Sengupta2, A Mylonas2, T Wang3, R O'Brien2, Y Lee4, P Poulsen5, D Nguyen2, P Keall2, J Booth1, (1) Northern Sydney Cancer Centre, St Leonards, NSW, AU, (2) The University of Sydney, Sydney, NSW, AU, (3) Western Sydney Local Health District, NSW, AU, (4) Princess Alexandra Hospital, Qld, AU, (5) Aarhus University Hospital, Aarhus N, DK

Presentations

MO-H345-IePD-F5-5 (Monday, 7/11/2022) 3:45 PM - 4:15 PM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 5

Purpose: The MV imager is an ideal real-time intrafraction motion management tool: it adds no additional imaging dose, provides motion data in the frame of reference of the treatment beam, and is available on any standard-equipped linear accelerator. This study extends a deep learning framework that automatically detects fiducial markers to track intrafraction liver motion using MV images, and evaluates the performance of the fully trained convolutional neural network (CNN) classifier in segmenting liver fiducial markers.

Methods: A CNN classifier was trained using MV images from 7 liver cancer patients (30 fractions) with implanted fiducial markers. The classifier consisted of four convolutional layers and one fully connected layer. The CNN performance and marker tracking system accuracy were validated on unseen MV images from 7 patients (13 fractions). Classifier performance was evaluated using the precision-recall curve (PRC), the area under the curve (AUC), sensitivity and specificity. Tracking system accuracy was evaluated by calculating the geometric error in beam's-eye-view (BEV) coordinates in the x- and y-directions, using ground truth from manually labelled images.
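The abstract does not give implementation details for the evaluation. As a hedged illustration only, the three reported classifier metrics could be computed from per-candidate ground-truth labels and classifier scores as sketched below; the function name is hypothetical, and the use of a rank-based ROC AUC is an assumption, since the abstract does not state whether the AUC refers to the ROC or the PRC curve.

```python
import numpy as np

def classifier_metrics(y_true, y_score, threshold=0.5):
    """Sensitivity, specificity, and a rank-based ROC AUC (assumed metric).

    y_true  : ground-truth marker/non-marker labels (1 = marker)
    y_score : classifier output scores in [0, 1]
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = y_score >= threshold

    tp = np.sum(y_pred & y_true)
    tn = np.sum(~y_pred & ~y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    # ROC AUC via the rank statistic (equivalent to the Mann-Whitney U),
    # assuming no tied scores.
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = y_true.sum()
    n_neg = (~y_true).sum()
    auc = (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return sensitivity, specificity, auc
```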

Results: The CNN classifier achieved an AUC of 0.98, a sensitivity of 97.94% and a specificity of 99.71%. The overall geometric tracking error (mean ± standard deviation [1ˢᵗ, 99ᵗʰ percentile]) was 0.0 ± 0.6 [-1.6, 1.6] mm and 0.0 ± 0.6 [-0.9, 1.1] mm in the x- and y-directions, respectively. For frames that were manually labelled, the CNN successfully tracked at least one marker in 49% (range = 22–97%) of frames for exhale breath-hold (EBH) patients and 15% (range = 0–66%) for free-breathing (FB) patients.
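The reported error statistics (mean ± standard deviation with 1ˢᵗ and 99ᵗʰ percentiles) can be reproduced from per-frame BEV errors with a short summary routine. This is a sketch, not the authors' code: the function name is hypothetical and the sample standard deviation and NumPy's default linear percentile interpolation are assumptions.

```python
import numpy as np

def tracking_error_summary(errors_mm):
    """Summarise per-frame geometric tracking errors (mm) in one BEV axis."""
    e = np.asarray(errors_mm, dtype=float)
    return {
        "mean": float(e.mean()),
        "sd": float(e.std(ddof=1)),          # sample SD (assumption)
        "p1": float(np.percentile(e, 1)),    # 1st percentile
        "p99": float(np.percentile(e, 99)),  # 99th percentile
    }
```

Applied separately to the x- and y-direction error arrays, this yields summaries in the same "mean ± SD [p1, p99]" form as the abstract.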

Conclusion: The first deep learning method of tracking liver fiducial markers on MV images was developed and evaluated on an unseen patient dataset. The high performance of the CNN classifier and sub-mm tracking accuracy show this is a feasible method for tracking liver fiducial markers using MV images.

Funding Support, Disclosures, and Conflict of Interest: Data were acquired from the ethics-approved TROG 17.03 LARK trial of liver SABR (ID: NCT02984566). Funding support was provided by the NSW Cancer Council Grant and the Cancer Australia Grant.

Keywords

Target Localization, Image-guided Therapy, Megavoltage Imaging

Taxonomy

TH- External Beam- Photons: onboard imaging (development and applications)
