
Enterprise AI Analysis

Strategic Use of Disinformation Terminology in Political Communication: Media Narratives of Delegitimisation

This study reveals how Spanish digital media strategically employ disinformation terminology in political communication, not as neutral transmitters but as active discursive actors shaping public opinion. It highlights the media's role in amplifying polarization and delegitimizing political opponents through biased framing and selective genre use.

Executive Impact at a Glance

Key metrics demonstrate the pervasive influence of strategic disinformation on media narratives and public perception.

• Articles analyzed: 178 (101 on the González Amador case, 77 on the Begoña Gómez case)
• Mentions of the term "fake": 82 in the González Amador case, 70 in the Begoña Gómez case
• Editorials account for 33.8% of Begoña Gómez case coverage

Deep Analysis & Enterprise Applications

The specific findings from the research are organized below into five enterprise-focused topic areas:

Media Coverage
Journalistic Genres
Disinformation Terminology
Political Statements
Discursive Tone
101 vs. 77: the González Amador case received more articles than the Begoña Gómez case.

Enterprise Application: AI-driven sentiment analysis and trend monitoring can identify asymmetric media attention on key figures or topics, allowing for proactive reputation management and strategic communication.
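As a minimal illustration of this idea, the Python sketch below counts coverage volume per public figure from a hypothetical feed of article metadata and flags asymmetric attention above an arbitrary cutoff. The articles list, the coverage_asymmetry helper, and the 1.25 threshold are illustrative assumptions, not figures or methods from the study.

```python
from collections import Counter

# Hypothetical article metadata: (outlet, public figure covered) pairs.
# In practice these records would come from a media-monitoring feed.
articles = [
    ("El País", "González Amador"),
    ("El País", "Begoña Gómez"),
    ("El Mundo", "González Amador"),
    ("El País", "González Amador"),
]

def coverage_asymmetry(records, figure_a, figure_b):
    """Ratio of coverage volume between two figures (articles on A / articles on B)."""
    counts = Counter(figure for _, figure in records)
    a, b = counts[figure_a], counts[figure_b]
    return a / b if b else float("inf")

ratio = coverage_asymmetry(articles, "González Amador", "Begoña Gómez")
if ratio > 1.25:  # illustrative alerting threshold, not a value from the study
    print(f"Asymmetric coverage detected: {ratio:.2f}x more articles on González Amador")
```

The same ratio can be computed per outlet, which is how an asymmetry such as El País's 21-to-7 split (discussed next) would surface automatically.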

Case Study: Asymmetrical Reporting in El País

El País, typically associated with a progressive editorial line, covered the González Amador case (21 articles) significantly more than the Begoña Gómez case (7 articles). This points to an uneven news-selection criterion, possibly aimed at amplifying negative information about political opponents.

This selective amplification aligns with the strategic use of media to influence public perception and counter narratives, demonstrating that coverage is often conditioned by editorial stance and political context rather than solely journalistic interest.

Feature: Predominant Genres
  Begoña Gómez case: Editorials (33.8%), News items (31.2%), Opinion pieces (26%)
  González Amador case: Reporting (35.6%), Editorials (31.7%), News items (18.8%)

Feature: Narrative Orientation
  Begoña Gómez case: Strong evaluative and interpretative bias, explicit ideological frameworks, lack of investigative depth.
  González Amador case: Greater willingness to explore context, diverse sources, broader narrative.

Feature: Strategic Implication
  Begoña Gómez case: Media acting as a political actor, shaping events towards value judgements.
  González Amador case: Media emphasizing contextualization, while still presenting narrative bias.

Enterprise Application: AI-powered genre detection and narrative analysis tools can help organizations understand how different media types frame discussions around their brand or industry, enabling tailored content strategies.
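A simplified sketch of such genre tagging follows, assuming a rule-based approach; the GENRE_CUES lexicon, tag_genre, and genre_shares are hypothetical stand-ins for what would normally be a trained classifier.

```python
import re

# Illustrative keyword cues per journalistic genre; these lists are assumptions
# for demonstration, not the coding scheme used in the study.
GENRE_CUES = {
    "editorial": [r"\beditorial\b", r"\bthis newspaper believes\b"],
    "opinion":   [r"\bcolumnist\b", r"\bin my view\b"],
    "reporting": [r"\binvestigation\b", r"\bdocuments reviewed\b"],
}

def tag_genre(article_text: str) -> str:
    """Return the first genre whose cues match; default to plain news."""
    lowered = article_text.lower()
    for genre, patterns in GENRE_CUES.items():
        if any(re.search(p, lowered) for p in patterns):
            return genre
    return "news"

def genre_shares(corpus: list[str]) -> dict[str, float]:
    """Genre distribution across a corpus, comparable to the percentages above."""
    tags = [tag_genre(text) for text in corpus]
    return {genre: tags.count(genre) / len(tags) for genre in set(tags)}
```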

Strategic Terminology Flow

Identify 'fake' (Anglicism) → employ 'hoax' (balanced use) → utilize 'deception' (contextual) → avoid 'lie' (direct accusation)

Enterprise Application: Natural Language Processing (NLP) models can track the prevalence and contextual use of specific terms (e.g., 'fake news', 'hoax') to detect disinformation campaigns and assess their rhetorical impact.
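A minimal term-tracking sketch using only the Python standard library; the English term list stands in for the Spanish terms analysed in the study, and term_profile is an assumed helper name rather than an existing tool.

```python
import re
from collections import Counter, defaultdict

# English glosses of the disinformation terms discussed in the study.
TERMS = {"fake", "hoax", "deception", "lie"}

def term_profile(documents):
    """Count occurrences of each term and keep a short context window per hit,
    so rhetorical use (e.g. inside attacks or scare quotes) can be reviewed."""
    counts, contexts = Counter(), defaultdict(list)
    for doc in documents:
        tokens = re.findall(r"\w+", doc.lower())
        for i, token in enumerate(tokens):
            if token in TERMS:
                counts[token] += 1
                contexts[token].append(" ".join(tokens[max(0, i - 5):i + 6]))
    return counts, contexts

counts, contexts = term_profile(["The minister dismissed the report as fake and a hoax."])
print(counts)            # Counter({'fake': 1, 'hoax': 1})
print(contexts["fake"])  # context window around each hit
```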

Finding: The "Fake" Phenomenon

The term "fake" was the most recurrent, with 82 mentions in the González Amador case and 70 in the Begoña Gómez case. This high frequency suggests a preference for Anglicisms, which often carry modern connotations and dissociate the discourse from strict legal language, enhancing rhetorical impact.

This aligns with the use of "fake news" in international politics as a rhetorical strategy to delegitimize critical information or opponents, rather than a neutral descriptor of false content. AI can help identify such patterns and their strategic intent.

33.8% of Gómez articles include disinformation terms in political statements.

Enterprise Application: AI-driven political discourse analysis can identify who (political actors, media, other actors) introduces disinformation terms, helping to map influence networks and accountability.

Aspect: Primary Actors
  Begoña Gómez case: Other political actors (12/26), Media (7/26), Involved leaders (7/26)
  González Amador case: Media (32/101), Other political actors (20/101), Involved leaders (15/101)

Aspect: Editorial Practice (El País)
  Begoña Gómez case: Limited presence (7 articles)
  González Amador case: 100% of disinformation terms within statements (21/21 articles)

Aspect: Strategic Implication
  Begoña Gómez case: Media reproduces discourse of others, or acts as neutral reporter.
  González Amador case: El País delegates evaluative weight to external actors, maintaining an appearance of objectivity.
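To illustrate how such attribution could work, the sketch below labels each disinformation-term hit as appearing inside a quoted statement or in the outlet's own editorial voice. The regex-based quote detection and the attribute_term_use helper are simplifying assumptions; a production pipeline would add quote-attribution and named-entity models to identify the speaker.

```python
import re

TERMS = ("fake", "hoax", "bulo", "lie")

def attribute_term_use(article_text: str):
    """Label each disinformation-term hit as appearing inside a quoted statement
    or in the outlet's own editorial voice. Speaker identification is omitted."""
    quoted_spans = [m.span() for m in re.finditer(r'["“][^"”]*["”]', article_text)]
    hits = []
    for m in re.finditer(r"\b(" + "|".join(TERMS) + r")\b", article_text, re.IGNORECASE):
        inside_quote = any(start <= m.start() < end for start, end in quoted_spans)
        hits.append((m.group(1), "quoted statement" if inside_quote else "editorial voice"))
    return hits

print(attribute_term_use(
    'The leader called the report "pure fake news", while the article itself labels the claim a hoax.'
))
# [('fake', 'quoted statement'), ('hoax', 'editorial voice')]
```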

Tone & Purpose Dynamics

Begoña Gómez case: defensive, neutral tone; purpose: assert truth, defend.
González Amador case: aggressive, ironic tone; purpose: polarize, discredit.

Enterprise Application: Advanced sentiment analysis and rhetorical device detection can uncover the underlying emotional charge and strategic intent behind communications, critical for crisis management and public relations.
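A lexicon-based sketch of tone scoring along the defensive/aggressive/ironic axes described above; the word lists and the tone_score and tone_label helpers are illustrative assumptions for demonstration, not the study's coding scheme.

```python
# Illustrative single-word lexicons; a real system would use a trained model.
AGGRESSIVE = {"corrupt", "shameful", "smear", "mafia"}
DEFENSIVE = {"truth", "clarify", "innocent", "transparent"}
IRONY_CUES = {"supposedly", "allegedly", "obviously"}

def tone_score(statement: str) -> dict[str, int]:
    """Count markers of each tone in a political statement."""
    words = set(statement.lower().split())
    return {
        "aggressive": len(words & AGGRESSIVE),
        "defensive": len(words & DEFENSIVE),
        "ironic": len(words & IRONY_CUES),
    }

def tone_label(statement: str) -> str:
    """Pick the dominant tone, falling back to 'neutral' when no marker fires."""
    scores = tone_score(statement)
    return max(scores, key=scores.get) if any(scores.values()) else "neutral"

print(tone_label("I only want to clarify what happened and am fully transparent."))  # defensive
print(tone_label("This is a shameful smear by a corrupt machine."))                  # aggressive
```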

Context: Informal Communication Amplifies Emotion

A predominance of statements in unregulated communication environments (33.8% Gómez, 31.7% Amador), particularly on social media (six instances in the Amador case), facilitates more direct, polarizing, or rhetorical language. This reinforces the idea that emotional charge enhances the receptivity to, and dissemination of, false content.

This suggests that political actors strategically leverage informal channels to bypass institutional constraints and mobilize audiences emotionally, highlighting the urgent need for robust verification protocols and media literacy initiatives.

Calculate Your Potential AI ROI

Estimate the efficiency gains and cost savings your enterprise could achieve by implementing AI-powered communication analysis.

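The calculator reduces to simple arithmetic: hours reclaimed = analysts × weekly review hours × automated share × 52 weeks, and savings = hours reclaimed × loaded hourly cost. A minimal sketch follows; all inputs (four analysts, ten hours per week, 60% automation, €55 per hour) and the estimate_roi helper are placeholder assumptions to replace with your own figures.

```python
def estimate_roi(analysts: int, review_hours_per_week: float,
                 automation_share: float, hourly_cost: float) -> tuple[float, float]:
    """Back-of-the-envelope estimate of annual hours reclaimed and cost savings.
    Every input is an assumption to be replaced with organisation-specific data."""
    hours_reclaimed = analysts * review_hours_per_week * automation_share * 52
    return hours_reclaimed, hours_reclaimed * hourly_cost

hours, savings = estimate_roi(analysts=4, review_hours_per_week=10,
                              automation_share=0.6, hourly_cost=55.0)
print(f"Annual hours reclaimed: {hours:,.0f}; estimated annual savings: €{savings:,.0f}")
```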

Your AI Implementation Roadmap

A phased approach to integrating AI for enhanced communication analysis and disinformation detection.

Phase 01: Assessment & Strategy (2-4 Weeks)

Initial audit of current communication practices, identification of key data sources, and definition of AI integration objectives. Develop a customized strategy for leveraging AI in media analysis.

Phase 02: Pilot & Data Integration (4-8 Weeks)

Implement AI tools on a limited dataset to validate effectiveness. Integrate key media monitoring platforms and internal communication channels with the AI system for initial data processing.

Phase 03: Full-Scale Deployment & Training (6-12 Weeks)

Roll out AI across all relevant communication functions. Provide comprehensive training to teams on using AI for disinformation detection, sentiment analysis, and narrative framing.

Phase 04: Optimization & Advanced Features (Ongoing)

Continuously monitor AI performance, refine models, and integrate advanced features like predictive analytics for emerging disinformation trends and real-time response mechanisms.

Ready to Transform Your Communication Strategy?

Embrace AI to navigate complex media landscapes, protect your reputation, and ensure clear, truthful communication. Our experts are ready to guide you.

Ready to Get Started?

Book Your Free Consultation.
