
Enterprise AI Analysis

Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

This paper evaluates whether Large Language Models (LLMs) can accurately simulate human susceptibility to misinformation. It finds that LLMs reproduce broad distributional tendencies but consistently overstate the association between belief and sharing. LLM-generated responses prioritize attitudinal and behavioral features over personal network characteristics, reflecting systematic biases in how misinformation-related concepts are represented in their training data and reasoning. The study suggests LLMs are better suited to diagnosing divergences from human judgment than to substituting for it.

Executive Impact Summary

Our analysis reveals critical discrepancies between human and LLM-simulated misinformation susceptibility, highlighting areas for strategic AI integration.

  • Human belief/sharing ρ (avg): 0.455
  • LLM belief/sharing ρ (avg): ~0.88
  • Human R² (max): 0.23
  • LLM R² (max): 0.874
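As a quick arithmetic check (a sketch using only the per-topic figures quoted in this analysis; the LLM ρ values are approximate, as in the source), the headline numbers follow directly:

```python
# Per-topic values quoted in this analysis, in the order
# Health, Climate, Political.
human_rho = [0.606, 0.418, 0.342]
llm_rho = [0.93, 0.96, 0.76]          # approximate values
human_r2 = [0.230, 0.205, 0.228]
llm_r2 = [0.874, 0.752, 0.751]

print(f"Human belief/sharing rho (avg): {sum(human_rho) / 3:.3f}")
print(f"LLM belief/sharing rho (avg): ~{sum(llm_rho) / 3:.2f}")
print(f"Human R^2 (max): {max(human_r2):.3f}")
print(f"LLM R^2 (max): {max(llm_r2):.3f}")
```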

Deep Analysis & Enterprise Applications


LLMs Overstate Belief-Sharing Correlation

LLM simulations show near-monotonic correlations between belief and sharing, substantially exceeding human observations. This attenuates the empirical dissociation present in human behavior.

Metric            Health   Climate   Political
Human data (ρ)    0.606    0.418     0.342
LLM data (avg ρ)  ~0.93    ~0.96     ~0.76
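The ρ values above are rank correlations between belief and sharing ratings. As a minimal self-contained sketch (using hypothetical Likert ratings, not the study's data), Spearman's ρ can be computed as the Pearson correlation of ranks:

```python
def ranks(values):
    """Assign 1-based ranks, averaging ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-7 Likert ratings from ten respondents (illustration only).
belief = [6, 2, 5, 7, 3, 4, 6, 1, 5, 2]
sharing = [5, 1, 6, 7, 2, 4, 7, 1, 4, 3]
print(f"Spearman rho = {spearman_rho(belief, sharing):.3f}")
```

A near-monotonic LLM simulation would push this statistic toward 1.0; the human data above shows it is far lower, especially for political misinformation.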

LLMs Inflate Explained Variance in Susceptibility Models

Regression models fitted on LLM-simulated outputs achieve substantially higher R² values than those fitted on human data, often by an order of magnitude.

Metric            Health (sharing)   Climate (sharing)   Political (belief)
Human R² (max)    0.230              0.205               0.228
LLM R² (avg max)  0.874              0.752               0.751

Network features contribute little to prediction for LLMs, unlike human data, where they are significant predictors of misinformation susceptibility.

In LLM-simulated data, removing network features results in little or no loss in predictive performance, suggesting LLMs largely ignore social network characteristics.
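The ablation logic described here can be sketched in a few lines: fit an ordinary least-squares model with and without network features and compare R². The data below is synthetic, and the feature names are illustrative assumptions, not the paper's variables; the "human-like" outcome is constructed so that network features carry real signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors (names are assumptions for illustration).
trust_science = rng.normal(size=n)   # attitudinal
social_media = rng.normal(size=n)    # behavioral
net_diversity = rng.normal(size=n)   # personal-network feature
net_size = rng.normal(size=n)        # personal-network feature

# Synthetic "human-like" outcome: network features contribute real signal.
y = (0.4 * trust_science + 0.3 * social_media
     + 0.5 * net_diversity + 0.3 * net_size
     + rng.normal(scale=1.0, size=n))

def r_squared(features, y):
    """R^2 of an OLS fit with intercept, via least squares."""
    X = np.column_stack([np.ones(len(y)), *features])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

full = r_squared([trust_science, social_media, net_diversity, net_size], y)
ablated = r_squared([trust_science, social_media], y)
print(f"R^2 full model:        {full:.3f}")
print(f"R^2 without networks:  {ablated:.3f}  (drop = {full - ablated:.3f})")
```

On human-like data the ablation produces a clear R² drop; the paper's finding is that the same ablation on LLM-simulated data produces almost none.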

LLMs Overemphasize Attitudinal/Behavioral Predictors

Systematic bias toward individual attitudes and behaviors in misinformation susceptibility.

LLM outputs place greater weight on salient attitudinal and behavioral features like 'trust in science', 'political leaning', and 'social media use', showing larger deviations from human effects for these predictors.

LLM Reasoning Pathway for Misinformation Simulation Bias

  1. Training data associations (e.g., 'trust in science' linked to 'misinformation')
  2. Model reasoning traces (chain-of-thought emphasizes individual attributes)
  3. Simulated outputs (exaggerated effects of attitudinal/behavioral features, minimal network impact)

LLMs: Diagnostic Tool, Not Human Substitute

While LLMs can reveal systematic divergences from human judgment, their limitations—such as exaggerated relationships, inflated explained variance, and disproportionate emphasis on individual over relational factors—mean they cannot simply substitute for human data. They are effective for diagnosing biases rather than perfectly replicating social processes.

  • ✓ LLM simulations yield overly regularized relationships.
  • ✓ They overemphasize individual-level factors (attitudes, demographics).
  • ✓ They underrepresent social interaction, diffusion, and peer influence.
  • ✓ Useful for exploratory or diagnostic purposes rather than as stand-ins for human data.

Human susceptibility is multi-dimensional, shaped by demographics, cognition, attitudes, and, crucially, personal networks.

Prior research shows that the composition and structure of personal networks are key predictors of misinformation belief and sharing, a factor largely overlooked by current LLM simulations.

Human vs. LLM: Network Feature Weight in Prediction

The discrepancy in how network features contribute to predictive power highlights a fundamental difference in how LLMs and humans process social information related to misinformation.

Human data
  • Removing network features leads to substantial reductions in model performance.
LLM data
  • Removing network features results in little or no loss in predictive performance.

Your AI Implementation Roadmap

A phased approach to integrate our insights and develop robust, bias-aware AI solutions tailored for your business.

Phase 01: Discovery & Assessment

Comprehensive audit of existing AI systems, data pipelines, and current misinformation susceptibility challenges. Identify key biases and data gaps.

Phase 02: Strategy & Solution Design

Develop a tailored strategy for bias mitigation, integrating network-aware models and robust evaluation frameworks. Design custom LLM-based simulation protocols.

Phase 03: Development & Integration

Implement and fine-tune AI models, ensuring they reflect nuanced human social dynamics. Seamlessly integrate solutions into your enterprise architecture.

Phase 04: Monitoring & Optimization

Continuous monitoring of AI performance against real-world human data. Iterative refinement and optimization to maintain accuracy and ethical alignment.

Ready to Refine Your AI Strategy?

Don't let hidden biases and misaligned simulations compromise your enterprise AI initiatives. Partner with Own Your AI to ensure your models accurately reflect real-world human behavior and social dynamics.

Ready to Get Started?

Book Your Free Consultation.
