
Enterprise AI Analysis

Autonomation, Not Automation: Activities and Needs of European Fact-checkers as a Basis for Designing Human-centered AI Systems

This study investigates the activities and needs of European fact-checkers to inform the design of human-centered AI tools to counter false information. Through semi-structured interviews and a quantitative survey, it identifies key problems, particularly in monitoring the online space, searching for evidence, and mapping different versions of the same misinformation. The findings emphasize the need for AI systems that augment human performance rather than fully automate the process, focusing on tasks such as check-worthy document detection, mapping existing fact-checks to new content, and fact-check summarization and personalization.

Executive Impact

Understand the scale of our findings and the potential for AI to transform fact-checking operations.

European Fact-Checkers Surveyed
Active European IFCN Signatories Covered
AI Tasks Identified for Support

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Introduction & Methodology
Monitoring Online Space
Claim Selection & Verification
Dissemination & AI Implications

Human-centered AI for Fact-checking

The research highlights that full automation of fact-checking is met with skepticism by both fact-checkers and researchers, because many tasks require human judgment. The real potential lies in assisted fact-checking built on 'autonomation' principles, where AI empowers humans rather than replacing them. Current AI-assisted tools often do not align with actual fact-checker needs, which stresses the importance of human-centered AI (HCAI) design that involves users from the earliest stages.

Enterprise Process Flow

Semi-structured interviews with Central European fact-checkers
Iterative content analysis (activities, problems, and needs)
Validation against previous research on fact-checkers' practices
Visualization through conceptual models of activities and practices (the extended fact-checking process)
Selection and validation of key problems through a quantitative survey with European fact-checkers
Visualization of the most important problems
Implications for AI tasks and tools

Challenges in Monitoring Online Space

Monitoring is globally recognized as one of the hardest parts of fact-checking. Central European fact-checkers are particularly constrained by limited tool support for non-English languages. They relied on CrowdTangle, which overemphasized virality, and its replacement, the Meta Content Library, lacks important features. Paid proprietary tools such as X Pro further limit access. Fact-checkers often resort to creating fake profiles to leverage platform personalization and identify misinformation more effectively, especially on instant messaging services such as WhatsApp, which offer no means of automated monitoring.

Some fact-checkers monitor as many as 800 social media pages.

Selecting and Verifying Claims

Fact-checkers spend considerable time filtering and prioritizing content. Selection criteria include timeliness, factuality, representativeness, controversiality, and societal impact, with issues affecting minorities often prioritized. While verification itself requires human judgment, evidence retrieval is seen as a promising candidate for partial automation (a minimal retrieval sketch follows the comparison below). Primary sources such as official statistics and government websites are crucial but often hard to access. Transcription of video content in local languages is a significant need. Previously published fact-checks are a highly credible source of evidence, yet they are hard to search, especially across languages.

Problem areas compared: Eastern Europe focus vs. Western Europe focus

Existing Fact-Checks Search
  • Eastern Europe: lack of resources, language barriers, limited Google Fact-Check Explorer coverage
  • Western Europe: better coverage
Image Manipulations
  • Eastern Europe: capacity constraints, focus on text
  • Western Europe: higher capacity, focus beyond text
Instant Messaging Alerts
  • Eastern Europe: capacity constraints, focus on text
  • Western Europe: higher capacity, focus beyond text
Source Evidence for Verification
  • Eastern Europe: high importance
  • Western Europe: high importance
Other Misinformation Versions
  • Eastern Europe: high importance
  • Western Europe: high importance
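
As a rough illustration of how evidence retrieval over primary sources could be partially automated, the sketch below builds a small lexical (BM25) index over a handful of invented document snippets and ranks them against a claim. The rank_bm25 library, the documents, and the whitespace tokenization are assumptions for illustration only, not the study's tooling.

```python
# Minimal sketch of lexical evidence retrieval over a local corpus of primary
# sources (e.g., scraped official statistics or government reports). The corpus
# and tokenization are illustrative assumptions, not a production pipeline.
from rank_bm25 import BM25Okapi

documents = [
    "Eurostat report: unemployment in the euro area fell to 6.5 percent in 2023.",
    "Ministry of Health bulletin: measles cases increased by 30 percent year on year.",
    "National statistics office: net migration figures for 2022 by region.",
]

tokenized_docs = [doc.lower().split() for doc in documents]
bm25 = BM25Okapi(tokenized_docs)

def retrieve_evidence(claim: str, top_k: int = 2):
    """Rank primary-source documents by BM25 relevance to the claim."""
    scores = bm25.get_scores(claim.lower().split())
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

for doc, score in retrieve_evidence("How much did unemployment change in the euro area?"):
    print(f"{score:.2f}  {doc}")
```

Lexical retrieval is a deliberately simple baseline here; the cross-lingual semantic approach described under Fact-check Finder below could complement it for non-English queries.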

Dissemination and AI Support

Dissemination primarily occurs on social media, especially Facebook. Small NGOs struggle with limited technological capacity for SEO, ClaimReview markup, and visually appealing content. The biggest problems for European fact-checkers relate to monitoring and verification, particularly searching for evidence in hard-to-find documents, videos, and statistics. Newly proposed AI tasks include check-worthy document detection, mapping existing fact-checks to additional online content, and fact-check summarization and personalization.

AI in Action: Fact-check Finder

Fact-check Finder is a novel tool developed to address the urgent need for retrieving previously fact-checked claims, especially across languages. It leverages semantic cross-lingual search by calculating similarity between sentence embeddings created by multilingual language models. This tool, co-created with fact-checkers, directly responds to the identified challenge of finding existing debunks efficiently, which is a key problem for European fact-checkers, particularly those dealing with low-resource languages.
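
The page does not include the Fact-check Finder implementation, so the snippet below is only a minimal sketch of the underlying idea: embed claims with a multilingual sentence encoder and rank previously fact-checked claims by semantic similarity. The model name (paraphrase-multilingual-mpnet-base-v2 from the sentence-transformers library) and the tiny claim database are illustrative assumptions.

```python
# Minimal sketch of cross-lingual fact-check retrieval via multilingual
# sentence embeddings. Model name and data are illustrative assumptions,
# not the actual Fact-check Finder implementation.
from sentence_transformers import SentenceTransformer, util

# Multilingual model that maps sentences from many languages into one vector space.
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Small illustrative database of previously fact-checked claims.
fact_checked_claims = [
    "COVID-19 vaccines do not alter human DNA.",
    "5G networks do not spread viruses.",
    "The 2020 US election was not decided by widespread voter fraud.",
]
claim_embeddings = model.encode(fact_checked_claims, convert_to_tensor=True)

def find_existing_fact_checks(new_claim: str, top_k: int = 3):
    """Return the most similar previously fact-checked claims, regardless of language."""
    query_embedding = model.encode(new_claim, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, claim_embeddings, top_k=top_k)[0]
    return [(fact_checked_claims[h["corpus_id"]], float(h["score"])) for h in hits]

# A Slovak-language claim can be matched against English-language debunks.
for claim, score in find_existing_fact_checks("Vakcíny proti COVID-19 menia ľudskú DNA."):
    print(f"{score:.2f}  {claim}")
```

Because the encoder maps all supported languages into a shared vector space, a query in a low-resource language can surface debunks originally written in English or any other covered language.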

Quantify Your AI Advantage

Estimate the potential time and cost savings AI can bring to your fact-checking operations.


Your AI Transformation Roadmap

A strategic overview of how we can integrate human-centered AI into your fact-checking workflow.

Phase 1: Needs Assessment & Data Curation

Collaborate with fact-checkers to refine requirements for check-worthy document detection and cross-lingual claim retrieval. Focus on curating diverse, multilingual datasets from primary sources (official statistics, governmental reports, audio-visual content with transcripts) that are relevant to European contexts. Establish clear guidelines for data annotation to ensure high-quality training data for AI models.
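
As an illustration of what Phase 1 data curation could produce, the sketch below defines a hypothetical annotation record for a multilingual training corpus. The field names, labels, and example values are assumptions, not a schema prescribed by the study.

```python
# Minimal sketch of an annotation record for a curated multilingual training
# corpus. All field names and label values are illustrative assumptions, not a
# prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AnnotatedDocument:
    doc_id: str
    language: str                    # ISO 639-1 code, e.g. "sk", "de"
    source_type: str                 # "official_statistics", "government_report", "transcript", ...
    text: str
    check_worthy: bool               # does the document contain claims worth fact-checking?
    claims: list[str] = field(default_factory=list)  # extracted check-worthy claims
    annotator_id: str = ""
    notes: str = ""

example = AnnotatedDocument(
    doc_id="sk-2024-000123",
    language="sk",
    source_type="government_report",
    text="Podľa ministerstva klesla nezamestnanosť o 2 %.",  # "According to the ministry, unemployment fell by 2%."
    check_worthy=True,
    claims=["Unemployment fell by 2 percent according to the ministry."],
    annotator_id="annotator-07",
)
print(example)
```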

Phase 2: Core AI Model Development & Integration

Develop and fine-tune large language models (LLMs) for check-worthy document detection, prioritizing societal impact and regional relevance over mere virality. Implement robust cross-lingual retrieval systems for existing fact-checks and misinformation versions, utilizing multilingual embedding models. Integrate these core AI capabilities into a human-centered interface, allowing fact-checkers to easily query, filter, and review results, with transparent explanations from the AI.
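
To make the Phase 2 goal concrete, the sketch below shows one possible baseline for check-worthy document detection: zero-shot classification with a multilingual NLI model from the Hugging Face transformers library. The model choice, labels, and threshold are assumptions; the roadmap itself calls for fine-tuned models that weigh societal impact and regional relevance.

```python
# Minimal sketch of check-worthy document detection via zero-shot
# classification. The model name, labels, and threshold are illustrative
# assumptions; a fine-tuned model would replace this baseline.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # multilingual NLI model (assumption)
)

candidate_labels = [
    "contains check-worthy factual claims",
    "opinion or satire",
    "not check-worthy",
]

def is_check_worthy(document: str, threshold: float = 0.5) -> bool:
    """Flag a document if the 'check-worthy' label scores above the threshold."""
    result = classifier(document, candidate_labels)
    scores = dict(zip(result["labels"], result["scores"]))
    return scores["contains check-worthy factual claims"] >= threshold

print(is_check_worthy("The minister claimed that crime dropped by 40 percent last year."))
```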

Phase 3: User Feedback, Iteration & Advanced Features

Deploy initial AI prototypes for real-world testing with European fact-checkers, gathering iterative feedback to refine accuracy and usability. Develop advanced features such as automatic transcription of audio/video, dynamic mapping of debunks to new content, and AI-assisted summarization/personalization of fact-checks for various platforms and audiences. Continuously monitor model performance and retrain with new data, ensuring adaptability to evolving misinformation tactics and language nuances.
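
A hedged sketch of the Phase 3 transcription-plus-summarization idea follows, combining the openai-whisper package for multilingual speech recognition with a transformers summarization pipeline. The model names, the file path, and the English-only summarizer are illustrative assumptions; a production pipeline would need local-language summarization or an intermediate translation step.

```python
# Minimal sketch of an audio/video debunk-support pipeline: transcribe
# local-language audio, then summarize the transcript. Model choices and the
# file path are illustrative assumptions, not the paper's tooling.
import whisper                      # openai-whisper package
from transformers import pipeline

def transcribe_and_summarize(audio_path: str) -> tuple[str, str]:
    """Transcribe an audio/video file and produce a short summary of it."""
    asr_model = whisper.load_model("base")          # multilingual speech recognition
    transcript = asr_model.transcribe(audio_path)["text"]

    # English-only summarizer used here for simplicity (assumption); a real
    # deployment would summarize in, or translate to, the local language.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    summary = summarizer(transcript, max_length=120, min_length=30)[0]["summary_text"]
    return transcript, summary

transcript, summary = transcribe_and_summarize("suspicious_video_audio.mp3")
print(summary)
```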

Ready to Empower Your Enterprise?

Connect with our AI specialists to tailor a human-centered AI strategy for your organization's unique needs.
