
Session: Data Science, Radiomics, and Computing

Computer-Assisted Diagnosis of Hepatic Portal Hypertension: A Novel, Attention-Guided Deep Learning Framework Based on CT Imaging and Laboratory Data Integration

Y Wang*, X Li, M Konanur, B Konkel, E Seyferth, N Brajer, M Bashir, K Lafata, Duke University, Durham, NC

Presentations

SU-E-TRACK 6-2 (Sunday, 7/25/2021) 3:30 PM - 4:30 PM [Eastern Time (GMT-4)]

Purpose: To develop a deep learning framework that integrates CT imaging with laboratory data to predict the hepatic venous pressure gradient (HVPG) of patients at risk of portal hypertension (PHTN).

Methods: We retrospectively identified 198 patients with CT imaging, laboratory results, and HVPG measurements. PHTN was defined as HVPG≥5 mmHg. Laboratory results included albumin, platelet count, and the aspartate aminotransferase-to-platelet ratio index (APRI), all of which are associated with the pathogenesis of PHTN. We designed a novel attention-guided deep learning approach that targets the surface nodularity of the left hepatic lobe, which has been linked to HVPG. The liver was segmented using a self-adapting nnU-Net model, from which the liver tip and center-of-mass were detected as anatomic landmarks. The surface of the left hepatic lobe was then isolated using a rotationally-invariant bounding box technique that partitions the liver relative to its tip and center-of-mass. Surface nodularity variation in the superior-inferior direction was captured by incorporating adjacent CT slices. The resulting 3-channel image of the left hepatic lobe's surface served as input to a ResNet-152 deep learning architecture trained to predict PHTN. To incorporate laboratory data, positional encoding was used to transcribe raw lab results into a high-dimensional feature space, which was concatenated to the fully connected layer of the ResNet-152 model prior to PHTN prediction. Performance of the joint model (imaging+laboratory) was compared to both the imaging-only and laboratory-only models.
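The positional-encoding step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the encoding dimension (`dim=32`), frequency base (`10000.0`), and the example lab values are assumptions; the abstract specifies only that raw lab results are transcribed into a high-dimensional feature space via positional encoding and then concatenated with the imaging features before the final prediction layer.

```python
import numpy as np

def encode_labs(lab_values, dim=32, base=10000.0):
    """Sinusoidal positional encoding of raw laboratory values.

    Each scalar lab result (e.g. albumin, platelet count, APRI) is mapped
    to a `dim`-dimensional vector of interleaved sin/cos features at
    geometrically spaced frequencies, analogous to transformer positional
    encodings; the per-lab vectors are flattened into one feature vector.
    """
    labs = np.asarray(lab_values, dtype=np.float64)      # shape (n_labs,)
    k = np.arange(dim // 2)                              # shape (dim/2,)
    freqs = 1.0 / base ** (2 * k / dim)                  # geometric frequencies
    angles = labs[:, None] * freqs[None, :]              # (n_labs, dim/2)
    enc = np.empty((labs.size, dim))
    enc[:, 0::2] = np.sin(angles)                        # even dims: sine
    enc[:, 1::2] = np.cos(angles)                        # odd dims: cosine
    return enc.reshape(-1)                               # flat feature vector

# Hypothetical example: albumin (g/dL), scaled platelet count, APRI.
lab_features = encode_labs([3.1, 1.42, 0.87], dim=32)
print(lab_features.shape)  # (96,)
```

In the joint model, a vector like `lab_features` would be concatenated with the features at the ResNet-152 fully connected layer before the PHTN output, so the classifier sees imaging and laboratory information jointly.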

Results: The joint model combining imaging and laboratory data achieved an AUC of 0.74 and an accuracy of 0.71, outperforming both the imaging-only model (AUC=0.60; accuracy=0.62) and the laboratory-only model (AUC=0.58; accuracy=0.60).

Conclusion: A novel deep learning paradigm was developed to predict PHTN based on the integration of CT imaging and laboratory results. Our results suggest that CT imaging and laboratory data provide complementary diagnostic information.


    Keywords

    CAD, CT, Computer Vision

    Taxonomy

    IM/TH- Image Analysis (Single Modality or Multi-Modality): Computer-aided decision support systems (detection, diagnosis, risk prediction, staging, treatment response assessment/monitoring, prognosis prediction)
