Communication Ethics & AI Governance
Dissonance in the Algorithmic Era: Evaluating Showcase Digital Competence and Ethical Resilience in Communication Training
The disruptive acceleration of Generative Artificial Intelligence (GAI) has amplified the phenomenon of Global Friction (Globofriction), where technological speed undermines informational stability and weakens democratic resilience. Within higher education, this scenario demands training models capable of preparing future communicators to act as guarantors of truth amid the automated erosion of discourse. This research evaluates the digital competence of Communication students through an interdisciplinary STEM-SSH (Science, Technology, Engineering, and Mathematics; Social Sciences and Humanities) nexus approach based on the Kirkpatrick model. A mixed-methods methodology was employed, analyzing self-perception and cybersecurity data (n = 59), technical performance in the production of interactive infographics (n = 25), and qualitative evidence from reflection forums on systemic risks. The results reveal a "showcase digital competence": a functional dissonance where future communicators demonstrate technical excellence under academic supervision but maintain negligent habits in their autonomous praxis. The study concludes that, given risks such as "data porridge" and strategic disinformation, it is urgent to transition toward a model of ethical resilience. This shift is imperative to reclaim the sovereignty of human judgment and ensure the integrity of public debate amidst current technological friction.
Key Findings & Executive Impact
This research uncovers a critical gap between theoretical knowledge and practical application in digital competence, highlighting the urgent need for ethical resilience in the age of AI.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
What is "Showcase Digital Competence"?
The study reveals a 'showcase digital competence' where future communicators demonstrate high technical performance and theoretical understanding of digital norms when under academic supervision. However, this competence does not consistently translate into proactive, systematic security and ethical habits in their autonomous digital praxis, creating a significant behavioral gap.
Understanding the "Prosumer Gap"
Hypothesis 1 (H1) confirms a persistent competency dissonance: students possess theoretical knowledge (Level 2) but lack systematic security and verification habits in their autonomous digital praxis (Level 3). This 'Prosumer Gap' highlights inconsistencies like occasional file scanning, infrequent privacy policy reading, and systematic avoidance of offline modes, despite knowing the risks.
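The Level 2 vs. Level 3 dissonance described above can be expressed as a simple per-practice gap score. The sketch below is purely illustrative: the function name, the practices, and all Likert values are hypothetical stand-ins, not the study's actual instrument or data (n = 59).

```python
# Hypothetical sketch: the "Prosumer Gap" as the difference between
# declared knowledge (Kirkpatrick Level 2) and reported habits (Level 3).
# All scores are illustrative 1-5 Likert values, NOT data from the study.

def prosumer_gap(knowledge: dict[str, float], habits: dict[str, float]) -> dict[str, float]:
    """Per-practice gap: positive values mean respondents understand a
    norm better than they practice it."""
    return {practice: knowledge[practice] - habits[practice]
            for practice in knowledge}

knowledge = {  # self-reported understanding of the risk (illustrative)
    "antivirus_scanning": 4.6,
    "privacy_policy_reading": 4.2,
    "offline_mode_usage": 3.9,
}
habits = {  # self-reported frequency of the protective habit (illustrative)
    "antivirus_scanning": 2.8,      # "occasional file scanning"
    "privacy_policy_reading": 1.9,  # "infrequent privacy policy reading"
    "offline_mode_usage": 1.4,      # "systematic avoidance of offline modes"
}

gaps = prosumer_gap(knowledge, habits)
```

A uniformly positive gap vector is the quantitative signature of "showcase" competence: knowledge scores high while habit scores lag.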
Cultivating Ethical Resilience
The research ratifies Hypothesis 3 (H3), showing that the intervention fostered critical awareness, leading students to denounce GAI outputs as 'data porridge' and a 'disinformation weapon'. This signifies a shift towards professional ethical resilience, where human judgment and accountability are prioritized over mere technical automation to maintain informational integrity.
The 'Showcase Digital Competence' Paradox
| Aspect | Academic Performance (Under Supervision) | Autonomous Praxis (Reality) |
|---|---|---|
| Antivirus Scanning | Risks of unscanned files understood | Occasional file scanning |
| Privacy Policy Reading | Importance of data policies acknowledged | Infrequent reading of privacy policies |
| Offline Mode Usage | Value as a security measure recognized | Systematic avoidance of offline modes |
| Password Security | Best practices demonstrated under assessment | Inconsistent habits in everyday practice |
The 'Data Porridge' Effect & Disinformation
Qualitative analysis reveals students perceive GAI output as 'data porridge' – a homogeneous mixture obscuring original authorship and accountability. This is seen as a potent 'disinformation weapon', especially when AI blends verified facts with 'hallucinations'.
Students come to act as 'Ethical Prosecutors', demanding human judgment and accountability from AI developers to counter algorithmic opacity; this underscores the urgency of a paradigm shift toward ethical resilience.
Quantify Your AI Impact
Estimate the potential savings and reclaimed hours by implementing robust AI ethical governance and digital competence training in your organization.
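A minimal sketch of the estimator described above. The function name, default figures, and formula (reclaimed hours times hourly rate, plus avoided-incident costs) are all assumptions for illustration; an organization would substitute its own inputs.

```python
# Hypothetical impact estimator: converts training-driven habit changes
# into reclaimed hours and estimated savings. All inputs are examples.

def estimate_ai_governance_impact(team_size: int,
                                  hours_saved_per_person_week: float,
                                  hourly_rate: float,
                                  incidents_avoided_per_year: int,
                                  cost_per_incident: float,
                                  weeks_per_year: int = 48) -> dict:
    # Hours no longer lost to rework, manual verification, or cleanup.
    reclaimed_hours = team_size * hours_saved_per_person_week * weeks_per_year
    productivity_savings = reclaimed_hours * hourly_rate
    # Avoided cost of disinformation or data-handling incidents.
    risk_savings = incidents_avoided_per_year * cost_per_incident
    return {
        "reclaimed_hours": reclaimed_hours,
        "total_savings": productivity_savings + risk_savings,
    }

result = estimate_ai_governance_impact(
    team_size=20,
    hours_saved_per_person_week=1.5,
    hourly_rate=45.0,
    incidents_avoided_per_year=2,
    cost_per_incident=10_000.0,
)
```

With these example inputs, a 20-person team reclaims 1,440 hours per year; the dominant term is usually avoided incident cost, which is also the hardest input to estimate honestly.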
Our Ethical AI Implementation Roadmap
A structured approach to transform your team's digital competence and ethical resilience.
Phase 1: Diagnostic Audit & Gap Analysis
Comprehensive assessment of current digital competence and ethical blind spots, identifying areas of "showcase" vs. genuine praxis.
Phase 2: Custom C-RIL Training Modules
Development of tailored collaborative learning interventions focusing on source verification, GAI auditing, and ethical decision-making.
Phase 3: Real-world Simulation & Behavioral Nudging
Situational exercises (e.g., disinformation audits) and systematic prompts to foster consistent, proactive ethical habits.
Phase 4: Continuous Reinforcement & Impact Measurement
Establishment of internal accountability frameworks, regular self-assessments, and ongoing monitoring for sustained ethical resilience.
Ready to Bridge Your AI Competency Gap?
Our experts can help your organization develop robust ethical AI governance and empower your teams with genuine digital resilience, moving beyond mere "showcase" competence.