Enterprise AI Analysis
Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility
This paper evaluates whether Large Language Models (LLMs) can accurately simulate human susceptibility to misinformation. It finds that LLMs reproduce broad distributional tendencies but consistently overstate the association between belief and sharing. LLM-generated responses prioritize attitudinal and behavioral features over personal network characteristics, reflecting systematic biases in how misinformation-related concepts are represented in their training data and reasoning. The study suggests LLMs are better suited to diagnosing divergences from human judgment than to substituting for it.
Executive Impact Summary
Our analysis reveals critical discrepancies between human and LLM-simulated misinformation susceptibility, highlighting areas for strategic AI integration.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
LLMs Overstate Belief-Sharing Correlation
LLM simulations show near-monotonic correlations between belief and sharing that substantially exceed those observed in human data, collapsing the empirical dissociation between believing and sharing that humans exhibit.
| Topic | Human data (Spearman ρ) | LLM data (avg. Spearman ρ) |
|---|---|---|
| Health | 0.606 | ~0.93 |
| Climate | 0.418 | ~0.96 |
| Political | 0.342 | ~0.76 |
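A minimal sketch of how this comparison could be reproduced, assuming a combined long-format dataset with `belief`, `sharing`, `topic`, and `source` columns (all column names and the file name are illustrative, not from the paper):

```python
# Compare the belief-sharing association in human vs. LLM-simulated responses.
import pandas as pd
from scipy.stats import spearmanr

def belief_sharing_rho(df: pd.DataFrame) -> pd.Series:
    """Spearman rho between belief and sharing intent, computed per topic."""
    return df.groupby("topic").apply(
        lambda g: spearmanr(g["belief"], g["sharing"]).correlation
    )

responses = pd.read_csv("responses.csv")  # hypothetical combined file
human_rho = belief_sharing_rho(responses[responses["source"] == "human"])
llm_rho = belief_sharing_rho(responses[responses["source"] == "llm"])

print(pd.DataFrame({"human_rho": human_rho, "llm_rho": llm_rho}))
```

If the gap reported above holds, the LLM column should sit near 1.0 for every topic while the human column stays well below it.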
LLMs Inflate Explained Variance in Susceptibility Models
Regression models fitted on LLM-simulated outputs achieve substantially higher R² values than those fitted on human data, often by an order of magnitude.
| Outcome | Human R² (max) | LLM R² (max, avg.) |
|---|---|---|
| Health sharing | 0.23 | 0.874 |
| Climate sharing | 0.205 | 0.752 |
| Political belief | 0.228 | 0.751 |
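The R² gap can be checked by fitting the same linear specification on human-reported and LLM-simulated outcomes. The sketch below is an assumption-laden illustration: the predictor list, outcome names, and file name are placeholders, not the paper's actual model specification.

```python
# Fit an identical regression on human and LLM-simulated outcomes, compare R^2.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

FEATURES = ["trust_in_science", "political_leaning", "social_media_use",
            "age", "education", "network_size", "network_homogeneity"]

def fit_r2(df: pd.DataFrame, outcome: str) -> float:
    """In-sample R^2 of a linear model with the shared predictor set."""
    X, y = df[FEATURES], df[outcome]
    model = LinearRegression().fit(X, y)
    return r2_score(y, model.predict(X))

data = pd.read_csv("survey_and_simulated.csv")  # hypothetical combined file
for source in ("human", "llm"):
    sub = data[data["source"] == source]
    print(source, {o: round(fit_r2(sub, o), 3) for o in ("belief", "sharing")})
```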
In LLM-simulated data, removing network features results in little or no loss in predictive performance, suggesting LLMs largely ignore social network characteristics.
LLM outputs place greater weight on salient attitudinal and behavioral features like 'trust in science', 'political leaning', and 'social media use', showing larger deviations from human effects for these predictors.
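Both observations, the negligible cost of dropping network features and the outsized weight on attitudinal predictors, can be probed with a simple ablation plus a look at standardized coefficients. This is a sketch under the same assumptions as above; the feature groupings are illustrative.

```python
# Ablation check: refit without network features, compare the R^2 drop,
# and inspect standardized coefficients for the remaining predictors.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.preprocessing import StandardScaler

ALL = ["trust_in_science", "political_leaning", "social_media_use",
       "network_size", "network_homogeneity", "weak_tie_share"]
NETWORK = ["network_size", "network_homogeneity", "weak_tie_share"]

def r2_and_coefs(df: pd.DataFrame, features: list[str], outcome: str = "sharing"):
    """R^2 and standardized coefficients for a linear model on `features`."""
    X = StandardScaler().fit_transform(df[features])
    y = df[outcome]
    model = LinearRegression().fit(X, y)
    return r2_score(y, model.predict(X)), dict(zip(features, model.coef_.round(3)))

data = pd.read_csv("survey_and_simulated.csv")  # hypothetical, as above
for source in ("human", "llm"):
    sub = data[data["source"] == source]
    full_r2, coefs = r2_and_coefs(sub, ALL)
    ablated_r2, _ = r2_and_coefs(sub, [f for f in ALL if f not in NETWORK])
    print(source, "R2 full:", round(full_r2, 3),
          "| without network features:", round(ablated_r2, 3))
    print("  standardized coefficients:", coefs)
```

On LLM-simulated data, the two R² values should be nearly identical, whereas on human data removing the network block should produce a visible drop.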
LLM Reasoning Pathway for Misinformation Simulation Bias
LLMs: Diagnostic Tool, Not Human Substitute
While LLMs can reveal systematic divergences from human judgment, their limitations—such as exaggerated relationships, inflated explained variance, and disproportionate emphasis on individual over relational factors—mean they cannot simply substitute for human data. They are effective for diagnosing biases rather than perfectly replicating social processes.
- ✓ LLM simulations yield overly regularized relationships.
- ✓ They overemphasize individual-level factors (attitudes, demographics).
- ✓ They underrepresent social interaction, diffusion, and peer influence.
- ✓ Useful for exploratory or diagnostic purposes rather than as stand-ins for human data.
Prior research shows that the composition and structure of personal networks are key predictors of misinformation belief and sharing, a factor largely overlooked by current LLM simulations.
Human vs. LLM: Network Feature Weight in Prediction
The discrepancy in how network features contribute to predictive power highlights a fundamental difference in how LLMs and humans process social information related to misinformation.
| Comparison | Observation |
|---|---|
| Human Data | Personal network composition and structure contribute meaningfully to predictions of belief and sharing; removing them degrades model performance. |
| LLM Data | Removing network features produces little or no loss in predictive performance; predictions rest almost entirely on attitudinal and behavioral features. |
Quantify Your AI Impact
Estimate potential time savings and cost efficiencies by optimizing your enterprise AI strategy.
Your AI Implementation Roadmap
A phased approach to integrate our insights and develop robust, bias-aware AI solutions tailored for your business.
Phase 01: Discovery & Assessment
Comprehensive audit of existing AI systems, data pipelines, and current misinformation susceptibility challenges. Identify key biases and data gaps.
Phase 02: Strategy & Solution Design
Develop a tailored strategy for bias mitigation, integrating network-aware models and robust evaluation frameworks. Design custom LLM-based simulation protocols.
Phase 03: Development & Integration
Implement and fine-tune AI models, ensuring they reflect nuanced human social dynamics. Seamlessly integrate solutions into your enterprise architecture.
Phase 04: Monitoring & Optimization
Continuous monitoring of AI performance against real-world human data. Iterative refinement and optimization to maintain accuracy and ethical alignment.
Ready to Refine Your AI Strategy?
Don't let hidden biases and misaligned simulations compromise your enterprise AI initiatives. Partner with Own Your AI to ensure your models accurately reflect real-world human behavior and social dynamics.