Enterprise AI Analysis: Symmetries and asymmetries between attitudes and interaction in relation to the emotional uses of LLMs


Symmetries & Asymmetries in LLM Emotional Engagement

This report analyzes "Symmetries and asymmetries between attitudes and interaction in relation to the emotional uses of LLMs," revealing critical insights into how users interact emotionally with Large Language Models (LLMs). The study highlights a nuanced relationship where declared attitudes often diverge from actual emotional engagement, driven by factors like anthropomorphization and perceived neutrality. Authored by Juan Pablo Duque Parra and Alejandro Santes Ortega.

Key Metrics & Enterprise Impact

The rapid evolution of Generative AI presents both unprecedented opportunities and complex challenges for enterprises. Understanding user emotional engagement with LLMs is crucial for responsible development and integration.

2.47/5 Mean Attitudinal Score (EAUE-GenAI Scale)

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Attitudes & Interaction Dynamics
Emotional Engagement Progression
Trust, Cognition & Risks
Methodology & Findings Integration

Attitudes vs. Practice: The Core Asymmetry

The study introduces a novel symmetry-asymmetry model to explain the complex relationship between declared attitudes towards emotional LLM use and actual interactional practices. While self-reported attitudes (measured by the EAUE-GenAI scale, mean 2.47 on a 5-point scale) suggest a low-to-moderate affective stance, qualitative data reveal a progressive scale of emotional engagement, indicating a significant disconnect. This divergence shows that users' actual emotional interactions often exceed their consciously stated positions, shaped by situational conditions and the technology's affordances.

A Scale of Emotional Engagement

Qualitative analysis identifies three progressive levels of emotional engagement with LLMs: (1) Emergent Emotional Advice (60% of responses), where LLMs are used for situational guidance without attributing intentions; (2) Validation (22.81%), involving using LLMs to confirm or stabilize interpretations and emotions; and (3) Anthropomorphization (17.19%), the highest level, where users implicitly attribute mental states like understanding and companionship to the LLM, often referring to it as a "friend" or "colleague." This progression suggests a deepening relational complexity over time and with increased interaction density.
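The reported category shares amount to tallying coded interview excerpts and computing each level's proportion. A minimal sketch in Python (the category labels follow the paper's scale, but the raw counts are hypothetical, chosen only so the shares match the reported percentages):

```python
from collections import Counter

# Hypothetical codes assigned to interview excerpts; the three labels
# follow the paper's engagement scale, the counts are illustrative.
codes = (["emergent_advice"] * 342
         + ["validation"] * 130
         + ["anthropomorphization"] * 98)

counts = Counter(codes)
total = sum(counts.values())
for level in ("emergent_advice", "validation", "anthropomorphization"):
    print(f"{level}: {100 * counts[level] / total:.2f}%")
```

With these counts the shares come out to 60.00%, 22.81%, and 17.19%, matching the figures reported above.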

Intentional Attribution & Systemic Trust

The theoretical framework draws on Dennett's intentional stance and Luhmann's theory of systemic trust. Users tend to anthropomorphize LLMs by attributing intentions, simplifying complex system behavior and fostering emotional engagement. This is further supported by LLM designs that simulate natural conversation. While systemic trust allows interaction with "black box" systems, the opacity of LLMs raises concerns about reliability and ethics. The study warns that increased emotional involvement, particularly validation and anthropomorphization, can foster dysfunctional cognitive biases (e.g., trust and authority biases) and compromise user privacy by encouraging oversharing.

Mixed Methods for Complex Phenomena

Employing a concurrent mixed-methods design, the study combined 285 survey responses (EAUE-GenAI scale) with 35 semi-structured interviews. The EAUE-GenAI scale (α=0.90) revealed a predominantly unidimensional structure focused on AI-mediated emotional experience, despite initial hypotheses for a two-dimensional structure (expression and regulation). The qualitative discourse analysis constructed categories abductively, revealing the emotional engagement scale. The integration process, guided by a convergence/divergence logic, highlights the model's capacity to explain inconsistencies between declared attitudes and observed practices, reinforcing its validity through empirical contrast.
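The reported reliability (α=0.90) is Cronbach's alpha, computed from item and total-score variances. A minimal sketch, assuming a 285-respondent matrix of 1-5 Likert items (the synthetic data below is illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) Likert matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic responses: one latent attitude plus item noise, so items
# correlate and alpha is high, as with a unidimensional scale.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(285, 1))
items = np.clip(np.round(latent + rng.normal(0, 0.5, size=(285, 6))), 1, 5)
print(round(cronbach_alpha(items), 2))
```

High alpha alone does not establish the two hypothesized dimensions, which is consistent with the study's finding of a predominantly unidimensional structure.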

Key Quantitative Insight

2.47/5 Mean Attitudinal Score (EAUE-GenAI) for Emotional LLM Use, indicating a low-to-moderate, conservative stance.

Enterprise Process Flow: Methodological Pathway

Start: Do attitudes predict emotional uses of generative AI?
Evidence: Likert (declared attitudes) vs. discourse analysis (observed practices)
Finding: no direct correspondence between attitudes and practices
Conclusion: attitudes alone do not explain emergence/intensity/variation
Implication: the Symmetry-Asymmetry model is supported and helps analyze divergence (validity via empirical contrast)
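The convergence/divergence logic above can be sketched as a toy integration rule: a case is classified as asymmetric when the declared attitude and the coded practice point in opposite directions. The threshold and labels below are illustrative, not taken from the study:

```python
def integration_flag(attitude_score: float, engagement_level: str) -> str:
    """Toy convergence/divergence check: attitude vs. coded practice."""
    high_engagement = engagement_level in {"validation", "anthropomorphization"}
    low_attitude = attitude_score < 3.0  # below midpoint of the 1-5 scale
    # Asymmetry when declared stance and observed practice disagree.
    return "asymmetry" if low_attitude == high_engagement else "symmetry"

print(integration_flag(2.47, "anthropomorphization"))  # -> asymmetry
print(integration_flag(2.47, "instrumental_use"))      # -> symmetry
```

The first call reproduces the paper's headline pattern: a conservative declared stance (2.47/5) paired with high-engagement practice is flagged as divergence.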

Symmetry & Asymmetry in LLM Interaction

Category: AI as a tool (Symmetry)
Subcategories: Support; Automation
Key implications for enterprise AI:
  • Primarily instrumental use.
  • Emotional distance often maintained.
  • Alignment between declared attitudes and functional practices.

Category: Emotional interaction scale (Asymmetry)
Subcategories: Emergent emotional advice; Validation; Emotional anthropomorphization (AI as a friend)
Key implications for enterprise AI:
  • Increasingly complex emotional engagement.
  • Divergence from explicitly declared attitudes.
  • Potential for privacy issues, cognitive biases, and unexpected user dependencies.

Real-World Emotional Engagement with LLMs: User Excerpts

These user excerpts illustrate the diverse and often unacknowledged emotional roles LLMs play, demonstrating a progression from instrumental use to deeply relational engagement:

Emergent Emotional Advice: "Once, I asked it something personal, for example, how to cope with a situation in which I was feeling very bad."

Validation Seeking: "Because it is neutral, I choose to talk to AI, since unlike with a person, I don't feel judged."

Anthropomorphization & Relational Use: "Both: on the one hand, as a great tool that helps simplify tasks, and as a friend, since if I ask it about my emotions... it can advise me."


Your Path to Responsible AI Integration

Based on these insights, a structured approach is vital for implementing LLMs within your organization, balancing innovation with ethical considerations and user well-being.

Phase 1: Strategic Assessment & Ethical Framework

Evaluate current interaction patterns with LLMs, identify potential emotional engagement points, and establish an ethical framework for responsible AI use within your enterprise. Focus on data privacy, bias detection, and user support mechanisms.

Phase 2: Pilot Programs & User Feedback Loops

Launch controlled pilot programs for specific LLM applications, closely monitoring user interactions. Implement continuous feedback loops to gather data on emotional responses and refine guidelines based on real-world usage patterns.

Phase 3: Training & Awareness Campaigns

Develop comprehensive training programs for employees on effective and ethical LLM interaction. Conduct awareness campaigns to highlight the distinction between AI capabilities and human-like attributes, mitigating anthropomorphization risks and fostering critical engagement.

Phase 4: Scaled Deployment & Continuous Governance

Gradually scale LLM deployment across the organization, ensuring robust governance models are in place. Continuously monitor for emerging emotional engagement patterns, conduct regular audits, and adapt policies to maintain alignment with organizational values and user well-being.

Ready to Navigate the Emotional Landscape of AI?

Understand how your enterprise can leverage LLMs effectively while mitigating risks related to user emotional engagement and trust. Book a personalized consultation with our AI strategists.
