AI EMOTIONS IN HIGHER EDUCATION
Feeling AI: Circulating emotions, institutional climates, and moral boundaries in student use of AI
This study explores how students emotionally and morally engage with AI in higher education, drawing on Sara Ahmed's affective economies. A national survey (n=8021) and qualitative focus groups (n=79) reveal a complex interplay of optimism, excitement, skepticism, and worry. Emotions like relief, guilt, gratitude, and vigilance circulate around AI, shaping perceptions of assessment, learning, and creativity. The analysis highlights how institutional affective climates mobilize pride, shame, and moral anxiety, positioning AI as an affective actor entangled with ideals of effort and authenticity. Students navigate complex ethical decisions, underscoring the need for critical affective literacy and pedagogical trust.
Authored by Glenys Oberg, Yifei Liang, Margaret Bearman, Tim Fawns, Michael Henderson, Kelly E. Matthews.
Executive Impact: Navigating the Affective Landscape of AI in HE
Higher education institutions are at a critical juncture, balancing AI's promise against its perils. Understanding students' emotional and moral responses is key to fostering an ethically sound and effective learning environment.
Deep Analysis & Enterprise Applications
Assessment: Vigilance, Fear, and Moral Boundaries
Assessment is a primary site where emotions like vigilance, uncertainty, and fear coalesce. Students described a 'potential danger' in AI use, fearing accusations of plagiarism and punitive scrutiny. This reveals how institutional policies, often driven by academic integrity concerns, enact an 'affective governance' that fosters self-policing and moral anxiety. Some students cultivated 'moral superiority' by resisting AI 'shortcuts,' while others felt 'guilt and fear' over even minor uses, highlighting the 'flickering' legitimacy of their academic selves.
Key Concepts: Affective governance, Moral affective boundaries, Vigilance & Fear.
Learning: Emotional Reversal and Authenticity Struggles
Student experiences of learning with AI demonstrate an emotional reversal, moving from initial excitement and relief to guilt, anxiety, and self-doubt. The perceived 'laziness' of using AI triggers anxiety about effort and learning authenticity. Students confront a 'moral discomfort' as AI challenges long-held academic values of 'hard-earned' knowledge, leading to shame 'sticking' to AI when it's seen as a threat to genuine effort. Boundary-policing emerges, differentiating 'benign' technical assistance (e.g., grammar checks) from core intellectual work (e.g., research), which must remain 'authentic.' Some students, however, reframe AI use as 'smart and pragmatic,' aligning with professional norms.
Key Concepts: Emotional reversal, Affective stickiness (shame/AI), Authenticity & Effort.
Creativity & Voice: Protecting Selfhood and Originality
Students hold deep emotional investments in authorship and originality, viewing them as moral and existential markers. AI is perceived by many as a threat that 'kills the writer's voice,' leading to fears of losing personal identity, becoming 'just useless,' a 'copy-paster,' or a 'robot.' This 'affective protest' mobilizes emotions like guilt, pride, and fear of mediocrity to police the boundaries of acceptable AI use. Students' narratives reveal a strong desire to protect their 'authentic' creative self, aligning with university 'affective scripts' that valorize original voice and effort as 'sacred markers of humanity.' Even optimism about AI is often conditional on using it 'carefully to stay oneself.'
Key Concepts: Personal sovereignty, Emotional protest, Authentic creation.
Enterprise Process Flow: Understanding Student-AI Affective Dynamics
| AI as a Collaborator | AI as a Threat |
|---|---|
| Gratitude and relief at immediate assistance | Guilt and anxiety over 'laziness' and diminished effort |
| 'Smart and pragmatic' use aligned with professional norms | Fear of plagiarism accusations and punitive scrutiny |
| Conditional optimism: using AI 'carefully to stay oneself' | A force that 'kills the writer's voice' and erodes authentic selfhood |
Case Study: Ella's Ambivalence – From Gratitude to Guilt
Ella (FG10, UniA) vividly illustrates the complex emotional landscape students navigate with AI. Initially expressing deep gratitude for AI's assistance, particularly as an international student new to the city, she quickly transitions to self-critique and guilt.
“I'm so grateful there's AI... but I feel like I'm getting lazier because I rely on ChatGPT a lot... it planned it (journey in the city) out for me, and you stop kind of thinking about things.”
Ella's narrative captures the immediate relief and perceived utility of AI, rapidly followed by an anxiety about personal effort and 'learning authenticity.' This emotional reversal demonstrates how the 'promise of easy feedback' quickly gave way to 'doubt,' forcing a negotiation between immediate convenience and deeply ingrained academic values. Her experience reflects the 'affective stickiness' of shame to AI when it is perceived as enabling 'laziness,' undermining the ideal of diligent intellectual engagement.
Advanced ROI Calculator: Quantify AI's Impact in Your Institution
Estimate the potential annual cost savings and hours reclaimed by strategically integrating AI solutions across your higher education institution.
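As a minimal sketch of how such an estimate might be computed (every input below — staff headcount, hours saved per week, hourly cost, adoption rate, working weeks — is a hypothetical placeholder for illustration, not a figure from the study):

```python
# Minimal ROI sketch. All inputs are hypothetical placeholders;
# substitute your own institution's data before drawing conclusions.

def estimate_ai_roi(staff_count, hours_saved_per_week, hourly_cost,
                    adoption_rate, weeks_per_year=44):
    """Estimate annual hours reclaimed and cost savings from AI adoption.

    adoption_rate is the fraction of staff actually using AI tools;
    weeks_per_year approximates the working year.
    """
    adopters = staff_count * adoption_rate
    hours_reclaimed = adopters * hours_saved_per_week * weeks_per_year
    cost_savings = hours_reclaimed * hourly_cost
    return hours_reclaimed, cost_savings

hours, savings = estimate_ai_roi(
    staff_count=500,          # hypothetical academic staff headcount
    hours_saved_per_week=2,   # hypothetical hours saved per adopter
    hourly_cost=60.0,         # hypothetical fully loaded hourly cost
    adoption_rate=0.4,        # hypothetical share of staff using AI tools
)
print(f"Hours reclaimed per year: {hours:,.0f}")
print(f"Estimated annual savings: ${savings:,.0f}")
```

With these placeholder inputs, the model yields 17,600 hours and roughly $1.06M per year; the point is the structure of the estimate, not the numbers.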
Strategic AI Implementation Roadmap for HE
Transforming institutional approaches to AI requires a phased, thoughtful roadmap that prioritizes ethical integration and student well-being.
Phase 1: Affective Audit & Dialogue
Conduct internal audits to understand existing emotional climates around AI among students and staff. Initiate open, ethical dialogues to surface fears, hopes, and moral boundaries without judgment. (Duration: 1-3 months)
Phase 2: Develop Critical Affective Literacy (CAL) Framework
Integrate CAL into curriculum and faculty development. Equip students and educators to critically analyze emotional responses to AI, recognizing how feelings shape perceptions and use. Shift focus from policing to understanding. (Duration: 3-6 months)
Phase 3: Cultivate Pedagogical Trust & Relational Engagement
Design assessment and learning activities that make AI visible and negotiable. Foster reciprocal learning relationships where uncertainty is embraced, and AI is co-navigated rather than rigidly controlled. (Duration: 6-12 months)
Phase 4: Policy & Infrastructure Alignment
Revise institutional policies to align with CAL and trust-based pedagogies. Invest in ethical AI infrastructure that supports responsible use, transparency, and student agency, avoiding surveillance-heavy approaches. (Duration: 12-18 months)
Ready to Transform Your Institution's AI Strategy?
Don't let the complex emotional landscape of AI stall your progress. Partner with our experts to develop a bespoke AI strategy that fosters innovation, academic integrity, and student well-being.