Purpose: Automatic labeling of vertebrae in CT is a valuable capability for quantitative diagnostic analysis, interventional planning, and an independent check against wrong-level surgery or radiosurgery. We propose a novel neural network approach (Ortho2D) that operates on 2D orthogonal slices of a 3D image to aggregate, localize, and classify features of interest, thereby avoiding the memory bottlenecks associated with fully 3D networks and enabling operation on higher resolution datasets.
Methods: Ortho2D uses two independent Faster R-CNN networks to label vertebrae in sagittal and coronal CT slices. The centroids of 2D detection bounding boxes are clustered in 3D to detect vertebral centroids, identify the anatomical region (cervical, thoracic, lumbar, or sacral), and identify the individual vertebral levels therein. A post-processing sorting method incorporates the confidence of the network output to refine classifications and reduce outliers. Ortho2D was evaluated on a publicly available dataset containing 302 normal and pathological spine CT images. Labeling accuracy and memory requirements were assessed in comparison to other methods recently reported in the literature. The approach was also extended to higher resolution CT data to investigate potential improvements in accuracy.
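The centroid-clustering step described above can be illustrated with a minimal sketch. This is not the authors' implementation: the detection data structures, the greedy clustering rule, and the `radius` parameter are all hypothetical simplifications assumed for illustration. Each sagittal-slice detection contributes a candidate 3D point (slice index plus in-plane box centroid), each coronal detection likewise, and nearby candidates from the two views are merged into one vertebral centroid.

```python
import numpy as np

def detections_to_points(sagittal_dets, coronal_dets):
    """Convert per-slice 2D box centroids into candidate 3D points.

    sagittal_dets: list of (slice_x, (y, z)) box centroids  (hypothetical format)
    coronal_dets:  list of (slice_y, (x, z)) box centroids  (hypothetical format)
    """
    pts = [(x, y, z) for x, (y, z) in sagittal_dets]
    pts += [(x, y, z) for y, (x, z) in coronal_dets]
    return np.array(pts, dtype=float)

def cluster_centroids(points, radius=10.0):
    """Greedy agglomeration: a point within `radius` (mm, assumed) of an
    existing cluster mean joins that cluster; otherwise it seeds a new one.
    Each surviving cluster mean is one vertebral centroid."""
    clusters = []  # each entry: list of member points
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

# Two synthetic vertebrae, each seen in both views:
pts = detections_to_points(
    [(10.0, (10.0, 10.0)), (10.0, (10.0, 40.0))],   # sagittal detections
    [(10.0, (11.0, 11.0)), (9.0, (10.0, 41.0))],    # coronal detections
)
centroids = cluster_centroids(pts, radius=10.0)     # -> 2 fused 3D centroids
```

A production version would also carry each box's class label and confidence into the cluster so that the post-processing sort can vote on region and level.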
Results: Ortho2D achieved vertebrae detection accuracy of 97.1%, region identification accuracy of 94.3%, and individual vertebral level identification accuracy of 91.0%. Ortho2D met or exceeded the performance of previously reported 2D and 3D labeling methods and reduced memory consumption by a factor of ~50 (at 1 mm voxel size) compared to a 3D U-Net. Extension of Ortho2D to higher resolution datasets than normally afforded by fully 3D networks demonstrated a 15% boost in labeling accuracy by virtue of improved memory efficiency and the ability to operate on high-resolution data.
Conclusion: The Ortho2D method achieved vertebrae labeling performance comparable to other recently reported methods with a significant reduction in memory consumption, permitting further performance gains via application to high-resolution CT.
Funding Support, Disclosures, and Conflict of Interest: This research was supported by an academic-industry partnership with Medtronic (Littleton, MA, USA).