Enterprise AI Analysis: Large Language Models Meet Text-Centric Multimodal Sentiment Analysis: A Survey

Research Paper Analysis

Large Language Models Meet Text-Centric Multimodal Sentiment Analysis: A Survey

Compared to traditional sentiment analysis, which only considers text, multimodal sentiment analysis needs to consider emotional signals from multiple modalities simultaneously and is therefore more consistent with the way humans process sentiment in real-world scenarios. It involves processing emotional information from various sources such as natural language, images, videos, audio, and physiological signals. However, although other modalities also contain diverse emotional cues, natural language usually carries richer contextual information and therefore occupies a crucial position in multimodal sentiment analysis. The emergence of ChatGPT has opened up immense potential for applying large language models (LLMs) to text-centric multimodal tasks. However, it is still unclear how existing LLMs can best adapt to text-centric multimodal sentiment analysis tasks. This survey aims to (1) present a comprehensive review of recent research on text-centric multimodal sentiment analysis tasks, (2) examine the potential of LLMs for text-centric multimodal sentiment analysis, outlining their approaches, advantages, and limitations, (3) summarize the application scenarios of LLM-based multimodal sentiment analysis technology, and (4) explore the challenges and potential research directions for multimodal sentiment analysis in the future.

Executive Impact & Key Takeaways

This survey highlights the transformative potential of Large Language Models (LLMs) and Large Multimodal Models (LMMs) in text-centric multimodal sentiment analysis. Integrating diverse modalities, particularly natural language, offers a more comprehensive understanding of human emotions in real-world scenarios.

Key metrics highlighted:
  • LLM parameter scale
  • Zero-shot performance relative to SOTA
  • Key modalities integrated: text, image, audio, video
  • Main fusion strategies: feature-, algorithm-, and decision-level

Deep Analysis & Enterprise Applications

Each topic below is explored in depth, with specific findings from the research reframed as enterprise-focused modules.

Large Language Models (LLMs)
Large Multimodal Models (LMMs)
MSA Basic Concepts
Image-Text SA
Audio-Image-Text SA (Video)
Multimodal Sarcasm Detection

LLMs: Foundation of Advanced AI

Large Language Models (LLMs), characterized by hundreds of billions of parameters, are trained on vast amounts of text data. They demonstrate powerful abilities in understanding and generating natural language and in solving complex tasks. Key features include:

  • In-Context Learning (ICL): Generating expected output from natural language instructions and task demonstrations without additional training.
  • Instruction Following: Performing well on unseen tasks through fine-tuning on multi-task datasets formatted with natural language descriptions.
  • Step-by-step Reasoning (Chain-of-Thought - CoT): Solving complex tasks by leveraging intermediate reasoning steps to derive final answers.

LLMs are utilized through two main paradigms: Parameter-frozen application (zero-shot and few-shot learning) and Parameter-tuning application (full-parameter and parameter-efficient tuning).
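
To make the parameter-frozen paradigm concrete, the following minimal Python sketch builds zero-shot and few-shot (ICL) sentiment prompts. The `call_llm` stub, the prompt wording, and the demonstration examples are illustrative assumptions, not any specific system's templates.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM endpoint and return its reply."""
    raise NotImplementedError("Wire this up to your LLM provider's client.")

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: the instruction alone defines the task, no demonstrations.
    return (
        "Classify the sentiment of the following sentence as "
        "positive, neutral, or negative.\n"
        f"Sentence: {text}\nSentiment:"
    )

def few_shot_prompt(text: str, demos: list[tuple[str, str]]) -> str:
    # Few-shot in-context learning: prepend labelled demonstrations;
    # no model weights are updated.
    parts = ["Classify the sentiment of each sentence as positive, neutral, or negative."]
    for demo_text, label in demos:
        parts.append(f"Sentence: {demo_text}\nSentiment: {label}")
    parts.append(f"Sentence: {text}\nSentiment:")
    return "\n\n".join(parts)

demos = [
    ("The battery dies in an hour.", "negative"),
    ("Delivery was fast and the packaging was neat.", "positive"),
]
print(few_shot_prompt("The screen is gorgeous but the speakers are tinny.", demos))
```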

LMMs: Bridging Modalities for Deeper Understanding

Large Multimodal Models (LMMs) extend LLMs by integrating various data types such as text, images, audio, and video. They enable a more comprehensive understanding and generation of diverse content by creating a unified multimodal embedding space.

  • Multimodal Capabilities: Processing visual inputs, generating textual descriptions from images, and handling audio data.
  • Unified Embedding Space: Separate encoders for each modality generate modality-specific representations, which are then aligned into a cohesive multimodal space where information from different modalities can be integrated and correlated (a toy sketch follows this list).
  • Examples: Notable LMMs include Gemini, GPT-4V, ImageBind, BLIP-2, LLaVA, and Qwen-VL, showcasing zero-shot and few-shot in-context learning abilities.
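
As a toy illustration of the unified embedding space described above, the sketch below projects pre-extracted text and image features into one shared space with small PyTorch encoders. The dimensions and encoder design are assumptions for illustration, not taken from any particular LMM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Projects one modality's pre-extracted features into a shared space."""
    def __init__(self, in_dim: int, shared_dim: int = 256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, shared_dim), nn.GELU(), nn.Linear(shared_dim, shared_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)  # unit-norm shared embedding

text_enc = ModalityEncoder(in_dim=768)   # e.g. features from a text backbone
image_enc = ModalityEncoder(in_dim=512)  # e.g. features from a vision backbone

text_feats = torch.randn(4, 768)         # stand-ins for real extracted features
image_feats = torch.randn(4, 512)

t = text_enc(text_feats)
v = image_enc(image_feats)
similarity = t @ v.T                     # cosine similarity in the shared space
print(similarity.shape)                  # torch.Size([4, 4])
```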

Multimodal Sentiment Analysis: Core Challenges & Fusion

Multimodal Sentiment Analysis (MSA) combines multiple modalities (e.g., image, text, audio) to enhance sentiment classification accuracy, reflecting human emotion processing more closely. Unlike text-only scenarios, MSA presents unique challenges:

  • Complexity of Sentiment Semantic Representation: Sentiment semantics derived from various modality representations are complex to select and fuse.
  • Complementarity of Sentiment Elements: Other modalities can effectively supplement textual expressions, which are often short and carry limited information on their own.
  • Inconsistency in Sentiment Expression: Conflicts in sentiment expressions among modalities (e.g., irony) require careful disambiguation.

The core of MSA involves independent representation of single-modal semantics and their fusion. Fusion types include Feature Layer Fusion (Early Fusion), Algorithm Layer Fusion (Model-level Fusion), and Decision Layer Fusion (Late Fusion).
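
The contrast between these fusion levels can be sketched in a few lines of PyTorch. The feature sizes and the tiny linear classifiers below are illustrative assumptions, not a real MSA model.

```python
import torch
import torch.nn as nn

text_x, image_x, audio_x = torch.randn(1, 128), torch.randn(1, 64), torch.randn(1, 32)

# Feature-layer (early) fusion: concatenate unimodal features, then classify.
early_head = nn.Linear(128 + 64 + 32, 3)  # 3 classes: positive / neutral / negative
early_logits = early_head(torch.cat([text_x, image_x, audio_x], dim=-1))

# Decision-layer (late) fusion: classify each modality separately, then combine decisions.
heads = [nn.Linear(d, 3) for d in (128, 64, 32)]
late_logits = torch.stack(
    [h(x) for h, x in zip(heads, (text_x, image_x, audio_x))]
).mean(0)

# Algorithm-layer (model-level) fusion sits in between: the modalities interact
# inside the model, e.g. via cross-modal attention, before a single head.
print(early_logits.shape, late_logits.shape)  # torch.Size([1, 3]) twice
```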

Image-Text Sentiment Analysis: From Coarse to Fine-Grained

This category focuses on sentiment analysis from image-text pairs, primarily in social media and e-commerce contexts. It is divided into coarse-grained and fine-grained levels.

Coarse-grained Level:

Involves sentiment classification (positive, neutral, negative) and emotion classification (happy, sad, angry, etc.) at a sentence level. Early models were feature-based, evolving to neural network models using attention mechanisms. LLMs enhance this by leveraging contextual world knowledge (e.g., WisdoM) and prompt-based fine-tuning (e.g., UP-MPF, PVLM). Datasets include MVSA, TumEmo, MEMOTION 2, MSED.

Fine-grained Level:

Focuses on analyzing finer sentiment elements like aspect term (a) and sentiment polarity (p). Subtasks include:

  • Multimodal Aspect Term Extraction (MATE): Extracting aspect terms from a sentence given an image.
  • Multimodal Aspect-based Sentiment Classification (MASC): Identifying sentiment polarity for a given aspect term.
  • Joint Multimodal Aspect-Sentiment Analysis (JMASA): Simultaneously extracting aspects and their polarities.
LLMs (e.g., A2II, GMP) are being adapted to these tasks to mitigate limitations of modality fusion and enhance performance.
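
As a hedged illustration of how a JMASA-style task can be posed to an LLM, the sketch below formats the sentence and a textualized image into an instruction with a JSON answer schema. The wording and schema are assumptions for illustration, not the actual formats used by A2II or GMP.

```python
import json

def jmasa_prompt(sentence: str, image_caption: str) -> str:
    # Joint extraction: ask for every aspect term together with its polarity.
    return (
        "Given the sentence and a caption of its accompanying image, extract "
        "every aspect term and its sentiment polarity (positive/neutral/negative). "
        'Answer as a JSON list of {"aspect": ..., "polarity": ...} objects.\n'
        f"Image caption: {image_caption}\n"
        f"Sentence: {sentence}\n"
        "Answer:"
    )

prompt = jmasa_prompt(
    sentence="The camera on this phone is stunning, but the battery barely lasts a day.",
    image_caption="A close-up photo of a smartphone showing its rear camera.",
)
# Expected model output (illustrative):
expected = [
    {"aspect": "camera", "polarity": "positive"},
    {"aspect": "battery", "polarity": "negative"},
]
print(prompt)
print(json.dumps(expected, indent=2))
```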

Audio-Image-Text Sentiment Analysis (Video): Temporal Dynamics

Video-based sentiment analysis differs from image-text analysis in that it emphasizes facial expressions and body movements in video sequences and must account for intra-modal emotional dynamics and cross-modal alignment over time. Tasks include sentiment classification (3, 5, or 7 categories) and emotion classification (multi-label or single-label).

Methods focus on two core themes:

  • Cross-modal Sentiment Semantic Alignment: Explores associations between emotional information across modalities, analyzes alignment relationships, and reduces semantic distance. Approaches include attention-based alignment (e.g., MulT), contrastive learning-based alignment (e.g., CLIP-inspired methods), and cross-domain transfer learning-based alignment (a rough attention-based sketch follows this list).
  • Multimodal Sentiment Semantic Fusion: Aggregates sentiment information to achieve comprehensive understanding. Methods include tensor-based fusion (e.g., LMF), fine-grained temporal interaction modeling fusion (e.g., RAVEN), and pre-trained model-based fusion (injecting multimodal information into language models).
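
The attention-based alignment idea above can be sketched with a single cross-modal attention layer, in the spirit of MulT-style methods. The sequence lengths, feature dimensions, and the assumption that both streams are already projected to a shared width are illustrative.

```python
import torch
import torch.nn as nn

d_model = 64
text_seq = torch.randn(1, 20, d_model)   # 20 text tokens (already projected)
audio_seq = torch.randn(1, 50, d_model)  # 50 audio frames (already projected)

cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

# Text queries attend over audio keys/values: each token gathers the audio
# frames most relevant to it, implicitly aligning the two streams in time.
aligned_text, attn_weights = cross_attn(query=text_seq, key=audio_seq, value=audio_seq)
print(aligned_text.shape, attn_weights.shape)  # (1, 20, 64) and (1, 20, 50)
```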

Datasets include ICT-MMMO, IEMOCAP, CMU-MOSI/MOSEI, MELD, CH-SIMS, M3ED, MER2023/2024, EMER, CMU-MOSEAS, UR-FUNNY.

Multimodal Sarcasm Detection: Unveiling Irony

Multimodal sarcasm detection aims to identify sarcastic meaning in text associated with an image, crucial for disambiguating inconsistent sentiment signals across modalities. Sarcasm can involve either complete sentiment conflict or instances where some modalities convey 'neutral' sentiment while others are positive/negative (implicit sentiment expression).

Key methodologies include:

  • Multi-task frameworks: Simultaneously recognizing sarcasm and classifying sentiment polarity.
  • Incongruity detection: Capturing emotional semantic cues across modalities.
  • Attention mechanisms: BERT-based models with cross-modal and text-oriented co-attention.
  • Graph Neural Networks (GNNs): Building heterogeneous intramodal and cross-modal graphs to learn inconsistency relationships.
  • Prior knowledge integration: Utilizing knowledge bases like ConceptNet to determine image-text relevance.

LLMs (e.g., CofiPara) are leveraged using prompts to cultivate divergent thinking and elicit relevant knowledge for judging irony, viewing LMMs as modal converters to textualize visual information for cross-modal alignment. Datasets include MMSD, MMSD2.0, MUSTARD.
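
The "LMM as modal converter" strategy can be sketched as a two-step prompt: textualize the image, then ask an LLM to weigh a literal reading against a sarcastic one. The prompt wording below is an illustrative assumption, not CofiPara's actual template.

```python
def sarcasm_prompt(sentence: str, image_description: str) -> str:
    # The image has already been converted to text, e.g. by a captioning LMM.
    return (
        "A social media post pairs the text below with an image.\n"
        f"Image (textualized): {image_description}\n"
        f"Text: {sentence}\n"
        "Consider two readings: (a) the text is meant literally; "
        "(b) the text is sarcastic because it conflicts with the image.\n"
        "Briefly argue for each reading, then answer 'sarcastic' or 'not sarcastic'."
    )

print(sarcasm_prompt(
    sentence="Great weather for a picnic!",
    image_description="A street flooded by heavy rain under dark storm clouds.",
))
```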

The Scale of Modern LLMs

175B+ Parameters in GPT-3, setting a new standard for AI capabilities.

Enterprise Process Flow: WisdoM Framework for Multimodal SA

1. Prompt Templates Generation
2. Context Generation (using LVLM)
3. Contextual Fusion (with MSA model)
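
A code-level view of this three-stage flow might look like the placeholder pipeline below. The function names, signatures, and canned return values are assumptions for illustration, not the WisdoM authors' implementation.

```python
def generate_prompt_template(task_description: str) -> str:
    # Stage 1 (placeholder): an LLM such as ChatGPT would author the template.
    return ("Describe the background knowledge that helps judge the sentiment of "
            "the sentence '{sentence}' given the attached image.")

def generate_context(template: str, image_path: str, sentence: str) -> str:
    # Stage 2 (placeholder): fill the template and query an LVLM with the image.
    prompt = template.format(sentence=sentence)
    return f"[context an LVLM would generate for {image_path} given: {prompt!r}]"

def fuse_and_classify(sentence: str, image_path: str, context: str) -> str:
    # Stage 3 (placeholder): an MSA model consumes sentence + image + context.
    return "positive"

def wisdom_style_pipeline(image_path: str, sentence: str) -> str:
    template = generate_prompt_template("image-text sentiment classification")
    context = generate_context(template, image_path, sentence)
    return fuse_and_classify(sentence, image_path, context)

print(wisdom_style_pipeline("post.jpg", "What a wonderful Monday morning..."))
```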

Comparison of LLM/LMM-based Multimodal Sentiment Analysis Methods

WisdoM [141]
  Usage of LLMs: Zero-shot learning: (1) ChatGPT provides prompt templates; (2) LMMs are prompted with these templates, the image, and the sentence to generate context.
  Advantages:
  • Leverages contextual world knowledge from LMMs for enhanced image-text sentiment classification.
  Disadvantages:
  • Potential for hallucination in LLMs, leading to inaccurate contextual knowledge.
  • Adaptive incorporation of the generated context requires further exploration.

ChatGPT-ICL [171]
  Usage of LLMs: Zero-shot and few-shot learning: ChatGPT predicts the final sentiment labels.
  Advantages:
  • Explores the potential of ICL for multimodal aspect-based sentiment analysis.
  • Achieves competitive performance with small sample sizes.
  Disadvantages:
  • Limited capability for aspect term extraction compared to fine-tuned methods.

A2II [160]
  Usage of LLMs: Full-parameter tuning: leverages LMMs to alleviate cross-modal fusion limitations.
  Advantages:
  • Explores an instruction-tuning approach for multimodal aspect-based sentiment classification.
  • Achieves impressive performance.
  Disadvantages:
  • Visual features extracted by the query-based Q-Former may be mismatched, neglecting some visual emotional signals.

CofiPara [207]
  Usage of LLMs: Zero-shot learning: uses potential sarcastic labels as prompts to cultivate divergent thinking in LMMs, eliciting relevant knowledge for judging irony.
  Advantages:
  • Addresses the negative impact of LMM noise by using competitive principles to align sarcastic content.
  • Treats LMMs as modal converters that textualize visual information for cross-modal alignment.
  Disadvantages:
  • Reliance on LMMs as the knowledge source introduces a dependency.
  • Erroneous judgments due to LMM noise/hallucinations persist despite these measures.

Case Study: Enhancing Smart Home Interactions with Multimodal AI

Challenge: Traditional smart home systems lack true emotional intelligence, limiting natural human-machine interaction and user-centric adaptation.

Solution leveraging LLMs/LMMs for MSA: Multimodal sentiment analysis, powered by LLMs, can transform smart furniture into emotionally intelligent companions. By analyzing a user's verbal cues (speech, tone), visual cues (facial expressions, body language), and even physiological signals (e.g., heart rate, skin conductance), the system can detect emotional states like fatigue, stress, or happiness.

Impact: This enables proactive and empathetic responses, such as automatically adjusting lighting for a fatigued user, playing calming music, or initiating a supportive dialogue. In automotive contexts, it can detect road rage or drowsiness, offering timely alerts and interventions. This leads to a more intuitive, supportive, and truly intelligent environment that anticipates user needs and emotional states.


Your AI Implementation Roadmap

A typical journey for integrating LLM-powered multimodal sentiment analysis into your enterprise operations.

Phase 1: Discovery & Strategy

Duration: 2-4 Weeks

Assess current sentiment analysis capabilities, identify key multimodal data sources (text, images, audio, video), define sentiment targets (e.g., customer feedback, social media, internal communications), and establish clear business objectives. This phase involves a detailed needs assessment, data audit, and a strategic roadmap for LLM/LMM integration, including choice of models and infrastructure considerations.

Phase 2: Data Preparation & Model Customization

Duration: 4-8 Weeks

Collect, clean, and preprocess multimodal datasets, focusing on annotation and alignment across modalities. Develop or fine-tune LLM/LMM-based models using techniques like instruction tuning, prompt engineering, or parameter-efficient tuning for specific sentiment analysis tasks (e.g., aspect-based sentiment, sarcasm detection). This includes establishing robust cross-modal alignment mechanisms and handling data inconsistencies.
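
For the parameter-efficient tuning option mentioned above, a minimal LoRA setup with the Hugging Face `peft` library might look like the sketch below. The base model id, target modules, and hyperparameters are placeholders to adapt to your own stack.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_id = "your-base-llm"  # placeholder: replace with the checkpoint you use
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                  # low-rank adapter dimension
    lora_alpha=32,        # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # depends on the model architecture
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small adapters are trainable

# From here, train on your instruction-formatted sentiment data with a standard
# training loop or Trainer; the base model weights stay frozen.
```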

Phase 3: Integration & Deployment

Duration: 3-6 Weeks

Integrate the customized multimodal sentiment analysis solution into existing enterprise systems (e.g., CRM, customer service platforms, marketing tools). This involves API development, setting up scalable inference infrastructure, and conducting rigorous testing to ensure accuracy, performance, and robustness. Deploy the solution in a controlled environment, monitoring its performance in real-time.

Phase 4: Monitoring, Optimization & Scaling

Duration: Ongoing

Continuously monitor model performance, refine prompts, update training data, and optimize for evolving sentiment patterns and new data modalities. Address challenges like hallucination and prompt sensitivity. Scale the solution across more departments or use cases, expanding its application to areas like intelligent human-machine interaction or personalized customer experiences. Regular feedback loops and model updates ensure long-term value.

Ready to Transform Your Enterprise with AI?

Discover how LLM-powered multimodal sentiment analysis can unlock deeper insights and drive smarter decisions for your business.

Ready to Get Started?

Book Your Free Consultation.
