Enterprise AI Analysis
Artificial Intelligence and Disinformation: A State-of-the-Art Review Through a Systematized Literature Review
The impact of artificial intelligence (AI) extends across virtually all sectors of society, including communication. One of the areas in which its influence is expected to be most significant is disinformation, arguably one of the greatest challenges faced by networked societies over the past decade. Through a systematized literature review with a scoping orientation, this study examines how research on artificial intelligence and disinformation has evolved over the last five years and identifies the main thematic strands structuring this field. The analysis of 62 articles reveals a predominance of qualitative approaches (53.3%) and a technocentric perspective structured around five main research lines: (1) AI as a source of disinformation, (2) AI as a tool to combat it, (3) regulatory frameworks, (4) deepfakes, and (5) algorithmic literacy. These findings highlight both the consolidation of the field and the need to advance toward more interdisciplinary and transfer-oriented research.
Key Executive Insights & Metrics
This research provides critical quantitative and qualitative data points to inform your enterprise AI strategy, focusing on both risks and opportunities in the age of disinformation.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
AI as a tool to combat disinformation
Description: Studies analyzing the use of AI algorithms to identify, classify, and track disinformative content (text, image, audio, or video).
Main Object of Analysis: Automated detection systems, algorithmic verification, diffusion pattern analysis.
Enterprise Insight: AI tools show promise in automated disinformation detection, particularly for textual content. However, challenges remain in handling multimedia, irony, and multilingual nuances. Hybrid human-AI models are crucial for effective fact-checking and identifying disinformation agents, with professional judgment at the core.
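The hybrid human-AI model described above can be sketched as a confidence-threshold triage: the system auto-flags only high-confidence items and routes everything else to human fact-checkers, keeping professional judgment at the core. The `score_claim` function below is a hypothetical keyword-based placeholder, not a real detection model; a production system would substitute a trained classifier.

```python
# Minimal sketch of hybrid human-AI triage for disinformation review.
# score_claim is a hypothetical stand-in for a trained classifier.

def score_claim(text: str) -> float:
    """Placeholder scorer: returns a pseudo-confidence in [0, 1]."""
    suspicious = ("miracle cure", "they don't want you to know", "100% proven")
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return min(1.0, 0.3 + 0.35 * hits)

def triage(items: list[str], threshold: float = 0.8) -> dict[str, list[str]]:
    """Auto-flag only high-confidence items; route the rest to humans."""
    routed: dict[str, list[str]] = {"auto_flagged": [], "human_review": []}
    for text in items:
        bucket = "auto_flagged" if score_claim(text) >= threshold else "human_review"
        routed[bucket].append(text)
    return routed

claims = [
    "Miracle cure they don't want you to know about!",
    "Quarterly earnings rose 4% year over year.",
]
result = triage(claims)
```

The threshold is the key operational lever: raising it sends more content to human reviewers, trading throughput for accuracy on the irony, multimedia, and multilingual cases where automated detection is weakest.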
AI as a source or amplifier of disinformation
Description: Research focusing on AI as an agent that produces or amplifies false or misleading content.
Main Object of Analysis: Generative models, automation of false narratives, scalability of disinformation.
Enterprise Insight: Generative AI, including LLMs, presents significant risks as a source and amplifier of disinformation. It enables the scalable production of fake content, academic dishonesty, impersonation, and propaganda. Critical awareness and robust regulatory frameworks are essential to mitigate these threats and ensure democratic integrity.
Regulation, ethics, and governance of AI in relation to disinformation
Description: Normative and legal approaches analyzing regulatory frameworks, public policies, and self-regulation mechanisms.
Main Object of Analysis: Legislation, ethical codes, algorithmic governance, platform accountability.
Enterprise Insight: The urgent need for robust regulation and ethical frameworks for AI and disinformation is a recurring theme. Studies emphasize the European AI Act, data protection, and the need for hybrid models of co-regulation involving governments, media, academia, and civil society. Accountability for platforms and responsible innovation are key to navigating geopolitical tensions.
Deepfakes and audiovisual manipulation
Description: A specific line of research addressing the synthetic generation of hyper-realistic images, audio, and video for disinformative purposes.
Main Object of Analysis: Political, media, or personal deepfakes; audiovisual manipulation.
Enterprise Insight: Research on deepfakes reveals their potential to erode trust and foster informational cynicism, though their actual prevalence in electoral contexts may be less than perceived. Users often struggle to detect them visually, highlighting the need for cognitive detection strategies. Hybrid approaches combining technical and social solutions are vital.
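One way to picture the hybrid technical-social approach is a policy that weighs a detector's score against social context, such as provenance metadata and source verification, before acting. This is an illustrative sketch only: the detector score, the thresholds, and the decision labels are all assumptions, not findings from the review.

```python
# Sketch: combine a hypothetical detector score with social/provenance signals.
from dataclasses import dataclass

@dataclass
class MediaItem:
    detector_score: float   # hypothetical model's fake-probability, 0..1
    has_provenance: bool    # e.g., signed C2PA-style metadata present
    source_verified: bool   # account/publisher identity verified

def assess(item: MediaItem) -> str:
    """Escalate based on both the technical score and social context."""
    if item.has_provenance and item.source_verified:
        return "trust"                 # strong external signals outweigh the model
    if item.detector_score >= 0.9:
        return "block_pending_review"  # high technical confidence
    if item.detector_score >= 0.5 or not item.source_verified:
        return "human_review"          # ambiguous: route to people
    return "allow"

decision = assess(MediaItem(0.95, has_provenance=False, source_verified=False))
```

The design choice worth noting is that provenance and identity signals can override a mid-range model score; this reflects the finding that users struggle to detect deepfakes visually, so trust decisions should not rest on perception (human or machine) alone.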
AI as an educational tool and media literacy
Description: Studies exploring the use of AI to educate citizens and enhance resilience to disinformation.
Main Object of Analysis: Educational tools, intelligent assistants, personalized learning.
Enterprise Insight: AI can play a role in enhancing media and algorithmic literacy. Educational initiatives, such as workshops, improve critical awareness of disinformation. However, traditional adoption models are insufficient; integrating ethical and psychosocial dimensions, along with new pedagogical methods, is crucial to harness AI's educational potential responsibly.
Systematized Literature Review Process
Research Methodology Dominance
53.3% of studies employed qualitative methods, highlighting the need for nuanced, context-rich analysis.
Chart: AI as Source/Amplifier vs. AI as Combat Tool
Case Study: European Union's AI Act
Regulatory Milestone in AI Governance
The European Union's AI Act (AIA) represents a significant milestone in regulating AI, particularly addressing its role in disinformation. While lauded for its comprehensive approach, studies highlight gaps in risk classification and accountability, especially concerning deepfakes and media applications. The AIA aims to balance innovation with ethical safeguards, underscoring the complexities of governing rapidly evolving AI technologies in a democratic context. This case exemplifies the urgent need for adaptable regulatory frameworks that can keep pace with technological advancements and societal impacts.
- Identifies AI's role in disinformation as a central concern.
- Highlights the challenge of balancing innovation with ethical safeguards.
- Reveals gaps in current legislation regarding deepfakes and media-specific AI applications.
- Emphasizes the need for ongoing adaptation and interdisciplinary collaboration for effective governance.
Calculate Your Potential ROI from AI Implementation
Estimate the efficiency gains and cost savings your enterprise could achieve by strategically implementing AI solutions, informed by the latest research.
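As a rough illustration, the estimate behind a calculator like this reduces to: hours saved times loaded labor cost, minus implementation cost. Every figure below is a placeholder assumption for demonstration, not a result from the underlying research.

```python
def estimate_ai_roi(hours_saved_per_month: float,
                    hourly_cost: float,
                    monthly_tool_cost: float,
                    months: int = 12) -> dict[str, float]:
    """Toy ROI model: savings from automated review time vs. tooling cost."""
    gross_savings = hours_saved_per_month * hourly_cost * months
    total_cost = monthly_tool_cost * months
    net = gross_savings - total_cost
    roi_pct = (net / total_cost) * 100 if total_cost else float("inf")
    return {"gross_savings": gross_savings,
            "net_benefit": net,
            "roi_pct": round(roi_pct, 1)}

# Hypothetical inputs: 120 review hours saved/month at $60/h, $3,000/month tooling.
print(estimate_ai_roi(120, 60.0, 3000.0))
# → {'gross_savings': 86400.0, 'net_benefit': 50400.0, 'roi_pct': 140.0}
```

A real business case would also price risk avoidance (reputational damage from amplified disinformation) and the human-review capacity the hybrid model still requires, neither of which this toy formula captures.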
Your AI Implementation Roadmap
A structured approach to integrating AI, mitigating disinformation risks, and maximizing ethical benefits within your organization.
Phase 1: Foundation & Assessment
Conduct an AI Readiness Assessment and Disinformation Risk Audit to identify current capabilities, vulnerabilities, and critical areas for intervention within your enterprise.
Phase 2: Strategy & Pilot Implementation
Develop a tailored AI/Disinformation strategy and pilot hybrid human-AI detection systems to establish a clear roadmap, test efficacy of solutions, and gather insights for broader rollout.
Phase 3: Integration & Training
Integrate AI tools into existing workflows and implement comprehensive media/algorithmic literacy training to empower employees, enhance organizational resilience, and ensure ethical AI deployment.
Phase 4: Governance & Iteration
Establish robust governance frameworks, monitor regulatory compliance, and continuously refine AI strategies to maintain compliance, adapt to evolving threats, and foster a culture of responsible AI use.
Ready to Transform Your Enterprise with Responsible AI?
Leverage cutting-edge research to develop an AI strategy that enhances efficiency, mitigates risks, and fosters trust. Our experts are ready to guide your journey.