Enterprise AI Analysis: A Step towards Interpretable Multimodal AI Models with MultiFIX

Multimodal AI Interpretability

A Step towards Interpretable Multimodal AI Models with MultiFIX

Real-world problems often depend on multiple data modalities, making multimodal fusion essential for leveraging diverse information sources. In high-stakes domains such as healthcare, understanding how each modality contributes to a prediction is critical for trustworthy and interpretable AI models. We present MultiFIX, an interpretability-driven multimodal data fusion pipeline that explicitly engineers distinct features from different modalities and combines them to make the final prediction. Initially, only deep learning components are used to train a model from data. The black-box (deep learning) components are subsequently either explained using post-hoc methods, such as Grad-CAM for images, or fully replaced by interpretable blocks, namely symbolic expressions for tabular data, resulting in an explainable model. We study MultiFIX under several training strategies for feature extraction and predictive modeling. Besides highlighting strengths and weaknesses of MultiFIX, experiments on synthetic datasets with varying degrees of interaction between modalities demonstrate that MultiFIX can generate multimodal models in which both the extracted features and their integration can be accurately explained, without compromising predictive performance.

Executive Impact: Enabling Trustworthy Multimodal AI

MultiFIX pioneers a critical pathway for enterprise AI by delivering models that are not only accurate but also fully interpretable. This is essential for high-stakes decisions, regulatory compliance, and fostering user trust in complex, data-rich environments.

93.9% Peak BAcc (Synthetic AND)
+0.035 BAcc Gain (XOR)
2 Modalities Evaluated (Image & Tabular)
6 Training Strategies Evaluated

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Overview
Methodology
Experimental Results
Discussion & Conclusion

MultiFIX is a novel interpretability-driven multimodal data fusion pipeline. It aims to leverage diverse information sources from multiple data modalities, which is crucial for real-world problems, especially in high-stakes domains like healthcare where understanding predictions is paramount for trust and interpretability. The framework engineers distinct features from different modalities and combines them for final predictions.

It uniquely integrates powerful Deep Learning (DL) for feature extraction with Genetic Programming (GP) to generate interpretable symbolic expressions. This allows for both black-box explanation via methods like Grad-CAM for images and full replacement of components with inherently interpretable blocks for tabular data.

The MultiFIX pipeline employs sparse feature engineering, limiting features to a small number (e.g., at most three per modality) to enhance interpretability. For images, CNNs (like pre-trained ResNet or autoencoders) are used. For tabular data, Multi-Layer Perceptrons (MLPs) are employed. An intermediate fusion strategy combines these engineered features.
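For illustration, a minimal PyTorch sketch of such an architecture is shown below; the layer sizes, the small stand-in CNN, and the class name MultiFixSketch are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultiFixSketch(nn.Module):
    """Illustrative MultiFIX-style architecture: two modality branches that each
    emit a small number of engineered features, and an intermediate-fusion block
    that only sees those features."""

    def __init__(self, n_tabular_inputs=10, n_features_per_modality=3, n_classes=2):
        super().__init__()
        # Image branch: a tiny CNN standing in for a pre-trained ResNet or autoencoder.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_features_per_modality), nn.Sigmoid(),
        )
        # Tabular branch: an MLP that distils the table into a few features.
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular_inputs, 16), nn.ReLU(),
            nn.Linear(16, n_features_per_modality), nn.Sigmoid(),
        )
        # Fusion block: combines only the concatenated engineered features.
        self.fusion = nn.Sequential(
            nn.Linear(2 * n_features_per_modality, 8), nn.ReLU(),
            nn.Linear(8, n_classes),
        )

    def forward(self, image, tabular):
        img_feats = self.image_branch(image)      # e.g. "circle present"-style features
        tab_feats = self.tabular_branch(tabular)  # e.g. "x1 > x2"-style features
        return self.fusion(torch.cat([img_feats, tab_feats], dim=1))
```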

Six training strategies are explored: End-to-end, Sequential with AE weights, Sequential with AE weights and De-freezing, Sequential with single modality weights, Hybrid with AE weights, and Hybrid with single modality weights. Interpretability is achieved by using Grad-CAM for image features and GP-GOMEA for tabular features and the final fusion block, replacing NN counterparts with symbolic expressions.
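As a sketch of the post-hoc explanation step for image features, a minimal hook-based Grad-CAM could look like the following; the function name, its arguments, and the generic feature_extractor callable are assumptions for illustration, not the authors' exact tooling.

```python
import torch

def grad_cam_for_feature(feature_extractor, target_layer, image, feature_index):
    """Minimal Grad-CAM sketch: a heatmap showing which image regions drive one
    engineered feature (e.g. I2). `feature_extractor` maps an image batch to the
    intermediate features; `target_layer` is a convolutional layer inside it."""
    activations, gradients = {}, {}
    fwd = target_layer.register_forward_hook(
        lambda module, inputs, output: activations.update(a=output))
    bwd = target_layer.register_full_backward_hook(
        lambda module, grad_in, grad_out: gradients.update(g=grad_out[0]))
    try:
        features = feature_extractor(image)              # engineered image features I1..Ik
        features[0, feature_index].backward()            # explain a single feature
        weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # channel importances
        cam = torch.relu((weights * activations["a"]).sum(dim=1))  # weighted activations
        return cam / (cam.max() + 1e-8)                  # normalised heatmap
    finally:
        fwd.remove()
        bwd.remove()
```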

Experiments on synthetic datasets (AND, XOR, Multifeature, Multiclass problems) with varying inter-modality dependencies demonstrate MultiFIX's ability to generate accurate and explainable multimodal models. Multimodal approaches consistently outperform single-modality baselines. While performance differences between various multimodal training strategies are often not statistically significant, end-to-end and hybrid approaches generally show better average BAcc values.

Crucially, MultiFIX provides interpretable models where individual component contributions (e.g., image features via Grad-CAM, tabular features via symbolic expressions, and fusion logic via symbolic expressions) can be analyzed. This allows for verification of learned relationships, even identifying equivalent but inverted intermediate features, enhancing trust and transparency.
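The "inverted yet equivalent" case can be made concrete with a small propositional sketch (hypothetical feature names, using SymPy for the simplification): if a learned image feature happens to encode the absence of the circle, the fusion expression simply negates it again, so the composed model is logically identical to the ground truth.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Not, simplify_logic

circle, tab = symbols("circle tab")     # ground-truth concepts behind the two modalities
learned_feature = Not(circle)           # intermediate image feature learned inverted
learned_model = And(Not(learned_feature), tab)  # fusion block compensates for the inversion
ground_truth = And(circle, tab)
print(simplify_logic(learned_model) == simplify_logic(ground_truth))  # True
```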

MultiFIX offers a unique step towards interpretable multimodal learning by enforcing a small number of learned features and combining DL with GP for inherent interpretability. The approach can generate interpretable models without sacrificing performance, especially for problems with tabular data or heterogeneous modalities. By producing inherently interpretable models rather than relying solely on post-hoc methods, it also reduces the risk of misleading explanations.

Limitations include the current focus on a limited set of modalities and synthetic problems. Future work will involve real-world datasets, handling more complex intermodal relationships, and improving interpretability by minimizing feature complexity and enhancing visualization tools for user-friendly analysis.

93.9% Balanced Accuracy Achieved on the AND Problem

MultiFIX Pipeline for Interpretable Multimodal AI

Multimodal Dataset
Feature Engineering Blocks (DL)
Concatenate Features
Fusion Block (DL)
Prediction (DL)
Replace DL with GP-GOMEA/Grad-CAM
Interpretable Model

Comparison of Training Strategies (Balanced Accuracy)

Strategy      | AND Problem   | XOR Problem   | Multifeature Problem | Multiclass Problem
Image Only    | 0.607 ± 0.041 | 0.502 ± 0.027 | 0.655 ± 0.006        | 0.485 ± 0.004
Tabular Only  | 0.693 ± 0.034 | 0.552 ± 0.020 | 0.674 ± 0.028        | 0.453 ± 0.014
End-to-End    | 0.939 ± 0.029 | 0.899 ± 0.023 | 0.798 ± 0.044        | 0.823 ± 0.038
Hybrid Single | 0.923 ± 0.014 | 0.918 ± 0.017 | 0.799 ± 0.054        | 0.919 ± 0.007
Higher Balanced Accuracy indicates better performance. MultiFIX consistently outperforms single-modality baselines.
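Balanced accuracy (BAcc), the metric reported above, is the unweighted mean of per-class recall, which keeps scores honest on imbalanced classes. A quick check with scikit-learn (the labels are illustrative):

```python
from sklearn.metrics import balanced_accuracy_score

y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1]
# Recall for class 0 is 3/4, for class 1 is 2/2; BAcc is their mean.
print(balanced_accuracy_score(y_true, y_pred))  # 0.875
```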

Interpretable Model Generation Example

Scenario: MultiFIX generates interpretable models by replacing deep learning components with GP-GOMEA symbolic expressions for tabular data and using Grad-CAM for image features. For instance, in the AND problem, an image feature (I2) shows high correlation with the presence of a circle (IGT) via Grad-CAM heatmaps, while a tabular feature (T1) is derived from a piecewise symbolic expression (inversely correlated with x1 > x2). The final prediction (Ypred) is a symbolic expression combining these intermediate features. This transparency reveals how the model reached its decision, even if intermediate features are inverted yet equivalent to the ground truth.
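To make the scenario concrete, an interpretable model of this shape could read as below; the thresholds and the exact piecewise form are hypothetical stand-ins for whatever expressions GP-GOMEA actually finds.

```python
def t1(x1, x2):
    # Tabular feature learned inversely correlated with the condition (x1 > x2).
    return 0.0 if x1 > x2 else 1.0

def y_pred(i2, x1, x2, threshold=0.5):
    # Fusion expression: "circle present AND (x1 > x2)", expressed through the
    # intermediate features I2 (circle presence) and the inverted T1.
    return int(i2 > threshold and t1(x1, x2) < threshold)
```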

Outcome: The interpretable model’s predictive power is often very close to, or sometimes even higher than, the black-box DL model, while providing full transparency into its decision-making process, enabling human verification and trust.

Key Performance & Explainability Metrics

0.006 BAcc Difference (Interpretable vs. DL, AND Problem)
+0.035 BAcc Increase (Interpretable vs. DL, XOR Problem)
3 Max Features per Modality
6 Training Strategies Explored
4 Synthetic Problems Evaluated

Quantify Your AI Impact

Estimate the potential efficiency gains and cost savings from implementing interpretable multimodal AI in your enterprise.


Your Roadmap to Interpretable AI

Our structured approach ensures a seamless integration of advanced AI, tailored to your enterprise's unique needs, driving measurable results with transparency.

Phase 1: Discovery & Strategy

We begin with an in-depth analysis of your current data landscape, existing decision-making processes, and business objectives. This phase defines the scope, identifies key modalities, and outlines a bespoke strategy for interpretable AI implementation.

Phase 2: MultiFIX Model Development

Leveraging the MultiFIX framework, we engineer custom feature extraction and fusion models. Our focus is on maximizing predictive performance while ensuring full interpretability at both the feature and prediction levels using a blend of Deep Learning and Genetic Programming.

Phase 3: Integration & Validation

The developed interpretable models are seamlessly integrated into your existing enterprise systems. Rigorous validation ensures accuracy and robustness and confirms that the explanations provided are clear, actionable, and aligned with domain-expert understanding.

Phase 4: Training & Optimization

We provide comprehensive training for your team on how to interpret, utilize, and manage the new AI models. Continuous monitoring and iterative optimization ensure long-term performance, adaptability, and sustained business value.

Ready to Build Transparent AI?

Book a personalized consultation to explore how MultiFIX can bring clarity, trust, and superior performance to your enterprise's multimodal AI initiatives.
