Session: Data Science Robustness, Performance, and Data Harmonization

Deep Fusion of CT Imaging and Blood Markers to Estimate Portal Venous Hypertension: Novel Optimization of Data Fusion Quality

Y Wang*, X Li, M Konanur, B Konkel, E Seyferth, N Brajer, J Liu, M Bashir, K Lafata, Duke University, Durham, NC

Presentations

SU-H430-IePD-F6-1 (Sunday, 7/10/2022) 4:30 PM - 5:00 PM [Eastern Time (GMT-4)]

Exhibit Hall | Forum 6

Purpose: Develop a framework to quantify optimal data fusion of CT imaging with hepatic blood markers during deep learning prediction of portal venous hypertension.

Methods: We developed a novel approach to quantify the quality of fusing pixel data with other clinically relevant data sources in deep learning problems. The approach models the fully connected layer (FCL) as a potential function whose distribution takes the form of the classical Gibbs measure. The features of the FCL are modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The contribution of each source, relative to the total input distribution of the FCL, provides a quantitative measure of source bias. To minimize this source bias, we implement a vector-growing encoding scheme, known as positional encoding, in which the low-dimensional blood markers are transcribed into a rich feature space that complements the high-dimensional imaging features. To test our approach, we developed a ResNet-152-based deep learning model to predict hepatic portal hypertension via the fusion of CT imaging and hepatic blood markers. The two data sources were processed in parallel and then fused into a single FCL. We optimized the quality of the fusion process according to our mathematical formalism and experimentally compared the effect of different fusion parameterizations on model performance.
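The abstract does not give implementation details, but the encoding-and-fusion step it describes can be sketched as follows. This is a minimal illustration, not the authors' code: the sinusoidal form of the encoding, the embedding dimension (64), the 2048-dim ResNet-152 feature size, and the marker values are all assumptions chosen for clarity.

```python
import numpy as np

def positional_encode(values, d_model=64, scale=10000.0):
    """Expand each scalar blood marker into a d_model-dim sinusoidal vector
    (a 'vector-growing' encoding; sinusoidal form is an assumption here)."""
    values = np.asarray(values, dtype=float)
    i = np.arange(d_model // 2)
    freqs = 1.0 / scale ** (2 * i / d_model)       # (d_model/2,) frequencies
    angles = values[:, None] * freqs[None, :]      # (n_markers, d_model/2)
    enc = np.empty((values.shape[0], d_model))
    enc[:, 0::2] = np.sin(angles)                  # even dims: sine
    enc[:, 1::2] = np.cos(angles)                  # odd dims: cosine
    return enc

# Hypothetical fusion: 2048-dim image features (ResNet-152 pooled output)
# concatenated with encoded markers to form the FCL input.
image_features = np.random.rand(2048)
markers = [0.8, 0.3, 1.2]                          # e.g. normalized lab values
encoded = positional_encode(markers, d_model=64).ravel()   # 3 * 64 = 192 dims
fused = np.concatenate([image_features, encoded])          # FCL input vector
print(fused.shape)                                 # (2240,)
```

The encoding grows each scalar into a 64-dim vector so the blood-marker branch contributes a feature volume closer to the imaging branch, which is the source-bias reduction the abstract describes.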

Results: The fused model, combining imaging data with positionally encoded blood markers at the theoretically optimal value of the fusion quality metric, achieved an AUC of 0.74 and an accuracy of 0.71. It significantly outperformed the imaging-only model (AUC = 0.60; accuracy = 0.62), the blood-marker-only model (AUC = 0.58; accuracy = 0.60), and sub-optimally fused models.

Conclusion: Our results suggest that CT imaging and hepatic blood markers can provide complementary information, but only if they are fused appropriately within the deep learning architecture.

Keywords

Computer Vision, CT, Signal Processing

Taxonomy

IM/TH- Image Analysis (Single Modality or Multi-Modality): Computer/machine vision
