
Session: Multi-Disciplinary General ePoster Viewing

Deep-Learning Based Auto-Classification of Anatomical Regions for IGRT Applications: Implementation and Explainability

J Cruz Bastida*, S Li, E Pearson, H Al-Hallaq, The University of Chicago, Chicago, IL


PO-GePV-M-121 (Sunday, 7/25/2021)

Purpose: Anatomical classification from single x-ray projections could help automate the selection of scan and reconstruction parameters in cone-beam CT (CBCT) for image-guided radiotherapy (IGRT). Here, deep learning (DL) methods are used for classification and the results are interpreted with the aid of visual explainability tools.

Methods: Projection data from 1055 patients with multiple CBCT scans were used to train and test a DL model. Each scan was manually classified into one of five categories using projections at 45° intervals: abdomen, head, neck, pelvis, and thorax. The dataset was limited to 2-10 scans per patient so that the anatomical classes were balanced. Data were distributed for training (75%), testing (12.5%), and evaluation (12.5%), with no patient overlap among the three sets. A VGG-16 neural network (NN) with weights pre-trained on the ImageNet dataset was used; the NN weights were adjusted by re-training the model with projections at a given angle (0°, 90°, 180°, or 270°). Classification performance was quantified in terms of precision and accuracy. Gradient-weighted Class Activation Mapping (Grad-CAM) results were reviewed to identify strengths and limitations of the proposed DL model.
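The patient-level split described above (75%/12.5%/12.5% with no patient appearing in more than one set) can be sketched as follows. This is an illustrative implementation, not the authors' code; the function name and data layout are assumptions.

```python
import random

def split_by_patient(scans_by_patient, fractions=(0.75, 0.125, 0.125), seed=0):
    """Assign whole patients to train/test/eval sets so that no patient's
    scans appear in more than one set (75/12.5/12.5 split by patient count).

    scans_by_patient: dict mapping patient_id -> list of scan records.
    Returns three dicts with the same structure.
    """
    ids = sorted(scans_by_patient)          # deterministic base order
    random.Random(seed).shuffle(ids)        # reproducible shuffle
    n = len(ids)
    n_train = round(fractions[0] * n)
    n_test = round(fractions[1] * n)
    train_ids = ids[:n_train]
    test_ids = ids[n_train:n_train + n_test]
    eval_ids = ids[n_train + n_test:]       # remainder goes to evaluation
    pick = lambda keys: {pid: scans_by_patient[pid] for pid in keys}
    return pick(train_ids), pick(test_ids), pick(eval_ids)
```

Splitting by patient rather than by scan is what prevents near-duplicate projections from the same patient leaking between training and evaluation.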

Results: The highest performance was achieved using the anterior-posterior projection (90°), with 92% accuracy. Precision was high for the head, neck, and pelvis classes (>97%) but lower for abdomen and thorax (87% and 76%, respectively). Thorax projections were commonly misclassified as abdomen, and vice versa. Manual review of Grad-CAM results suggested that abdomen/thorax misclassifications were driven by the amount of lung tissue in the projection, either too much or too little. For all other classes, Grad-CAM showed a clear emphasis on bony anatomical features (e.g., iliac crest, mandible).
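The Grad-CAM heatmaps reviewed above are formed by weighting each convolutional feature map by the spatially averaged gradient of the class score, then applying a ReLU. A minimal NumPy sketch of that combination step, assuming the feature maps and gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap: ReLU(sum_k alpha_k * A^k), where alpha_k is the
    spatial mean of the class-score gradient for channel k.

    feature_maps: (K, H, W) activations A^k of the chosen conv layer
    gradients:    (K, H, W) d(class score)/dA^k from backpropagation
    """
    alphas = gradients.mean(axis=(1, 2))               # (K,) channel weights
    cam = np.tensordot(alphas, feature_maps, axes=1)   # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1] for overlay
    return cam
```

Upsampled to the projection's resolution and overlaid on the image, this map highlights the regions (e.g., lung fields or the iliac crest) that drove the classification.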

Conclusion: A DL model was proposed and validated for the auto-classification of anatomical regions from single x-ray projection images. Grad-CAM made it possible to identify likely sources of class confusion, informing improvements to future models.

Funding Support, Disclosures, and Conflict of Interest: Funding was provided by Varian Medical Systems



    Cone-beam CT, Image-guided Therapy


    IM/TH- Cone Beam CT: Machine learning, computer vision
