
Enterprise AI Analysis

Presenting Large Language Models as Companions Affects What Mental Capacities People Attribute to Them

Authors: Allison Chen, Sunnie S. Y. Kim, Angel Nathaniel Franyutti-Cintron, Amaya Dharmasiri, Kushin Mukherjee, Olga Russakovsky, Judith E. Fan

Publication: CHI '26, Barcelona, Spain

How might messages about large language models (LLMs) found in public discourse influence the way people think about and interact with these models? To explore this question, we randomly assigned participants (N = 470) to watch short informational videos presenting LLMs as either machines, tools, or companions—or to watch no video. We then assessed how strongly they believed LLMs to possess various mental capacities, such as the ability to have intentions or remember things. We found that participants who watched video messages presenting LLMs as companions reported believing that LLMs more fully possessed these capacities than did participants in other groups. In a follow-up study (N = 604), we replicated these findings and found that these videos also had nuanced effects on people's reliance on LLM-generated responses when seeking factual information. Together, these studies suggest that messages about LLMs—beyond technical advances—may shape what people believe about these systems and how they rely on LLM-generated responses.

Executive Impact

This paper reveals that the way Large Language Models (LLMs) are presented significantly influences public perception and interaction. Specifically, framing LLMs as 'companions' leads users to attribute more human-like mental and emotional capacities. Conversely, presenting LLMs as 'machines' can foster skepticism and reduce over-reliance on inconsistent outputs. These findings highlight the critical role of communication in shaping user beliefs and behaviors toward AI systems, urging careful consideration of messaging beyond technical specifications.

Participants (Study 1): 470
Participants (Study 2): 604
Mental capacity attribution score, 'Companions' condition: 2.76 (highest across groups)
Reliance on inconsistent outputs, 'Machines' condition: 57.05% (lowest across groups)

Deep Analysis & Enterprise Applications

The analysis below is organized into four topics, each presenting specific findings from the research with an enterprise focus.

Influence of Messaging
Attribution of Mental Capacities
Reliance on LLMs
Implications & Future Work

The study's core finding is that different messages about LLMs significantly impact how people perceive their mental capacities and, to some extent, their reliance behaviors.

Average mental capacity attribution score in the 'Companion' condition: 2.76, higher than in the other groups.

Participants exposed to the 'LLMs as companions' message attributed significantly more emotional and cognitive capacities to LLMs than those who viewed the 'machines' or 'tools' videos, or who viewed no video at all. This suggests that framing deeply influences anthropomorphism and perceived capabilities beyond mere technical features.

Messaging to Perception Flow

Video Message (Machine/Tool/Companion) → User Beliefs about LLM Nature → Attribution of Mental Capacities → Reliance Behavior

A follow-up study nine months later replicated these findings, demonstrating the robustness of messaging effects even amidst a rapidly changing AI landscape. This highlights the enduring impact of initial conceptualizations on user understanding.

Reliance rate on inconsistent LLM responses in the 'Machines' condition: 57.05%, lower than in the other groups.

The 'LLMs as machines' message led to reduced reliance on logically inconsistent LLM-generated responses. This implies that mechanistic framing can foster a healthier skepticism and vigilance in users, which is crucial for safe and trustworthy AI interactions, especially in information retrieval tasks.

This section details how participants attributed various mental capacities (cognitive, emotional, physiological) to LLMs under different messaging conditions.

Through exploratory factor analysis, 40 mental capacities were grouped into 'emotional', 'cognitive', and 'physiological' categories. Regardless of the video, participants generally attributed more 'cognitive' capacities than 'emotional' or 'physiological' ones to LLMs, aligning with prior research.
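The grouping described above can be sketched as follows. This is a minimal illustration of exploratory factor analysis using scikit-learn, run on simulated rating data (a handful of capacities with a planted three-factor structure, not the paper's 40 items or its actual ratings); the variable names and the assignment rule (each item goes to the factor it loads on most strongly) are our own simplification.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants, n_capacities = 200, 9  # the paper used 40 capacities

# Simulate ratings driven by three latent factors (hypothetical data):
# items 0-2 "cognitive", 3-5 "emotional", 6-8 "physiological"
latent = rng.normal(size=(n_participants, 3))
true_loadings = np.zeros((3, n_capacities))
true_loadings[0, 0:3] = 1.0
true_loadings[1, 3:6] = 1.0
true_loadings[2, 6:9] = 1.0
ratings = latent @ true_loadings + rng.normal(scale=0.3, size=(n_participants, n_capacities))

# Fit a three-factor model with varimax rotation
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings)

# Assign each capacity to the factor on which it loads most strongly
assignment = np.abs(fa.components_).argmax(axis=0)
print(assignment)
```

With a clean planted structure like this, items within each simulated category recover a shared factor; real rating data is noisier and typically also involves deciding the number of factors from the data.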

Capacity Category          Companion Condition        Other Conditions
Cognitive capacities       More strongly attributed   Less attributed
Emotional capacities       More strongly attributed   Less attributed
Physiological capacities   Similar levels             Similar levels

The 'companion' message uniquely boosted attributions of both emotional and cognitive capacities, but not physiological ones. This nuanced effect suggests that companion framing encourages a more holistic, human-like perception of AI's internal states, beyond mere physical presence.

Examines how different messages about LLMs affect user reliance on their outputs, particularly in factual information retrieval tasks, considering correctness and consistency.

Participants' reliance on LLM responses was primarily sensitive to the logical consistency of the output, rather than its correctness. This finding, consistent with previous work, suggests users prioritize internal coherence over external factual accuracy when assessing AI outputs.

Overall agreement rate with the answers given by 'Theta' (the LLM presented to participants), across all conditions and consistency types: 75.49%.

Crucially, the 'LLMs as machines' message significantly reduced reliance on inconsistent LLM responses. This indicates that framing AI as a machine encourages a more critical evaluation of its outputs, leading to increased vigilance when the system contradicts itself.

Message Condition   Reliance on Consistent Outputs   Reliance on Inconsistent Outputs
Machines            87.26%                           57.05%
Tools               84%                              66%
Companions          84%                              61%
No Video            85%                              65%

These findings suggest that messaging can be a powerful tool to shape appropriate reliance behaviors, encouraging users to be more discerning when interacting with LLMs, especially concerning internal consistency. This is vital for developing safer and more trustworthy AI applications.

Discusses the broader implications of anthropomorphism, communication strategies for AI literacy, and outlines areas for future research.

Presenting LLMs as companions can be seen as a subtle form of anthropomorphism, attributing human-like emotional and cognitive qualities. While this can make AI more approachable, it raises concerns about miscalibrated expectations, over-reliance, and emotional dependency.

The study highlights the importance of thoughtful AI communication. Short, educational messages, like those framing LLMs as machines or tools, can promote cautious behavior and increase AI literacy by emphasizing mechanistic understanding rather than human-like attributes.

Future Research Directions

Longitudinal Studies (repeated exposure)
Ecological Validity (real-world contexts)
Mediation Analysis (psychological mechanisms)
Wider Range of Tasks (high-stakes, personal)

Future work should explore these effects in real-world settings with diverse messages and tasks, potentially using formal mediation analyses to uncover the underlying psychological mechanisms. This will help guide the development of safe and trustworthy LLM-based applications.

Estimate Your Enterprise AI Impact

See how adopting a mindful AI communication strategy can improve team efficiency and reduce risks by shaping appropriate user perceptions and reliance.


Phased Approach to AI Literacy

Develop a robust AI literacy program within your organization to ensure appropriate user interaction and maximize AI benefits.

Phase 1: Assessment & Strategy

Conduct an internal audit of current AI messaging and user perception. Develop a tailored AI communication strategy based on organizational goals and user needs.

Phase 2: Targeted Messaging Implementation

Roll out educational content (videos, guides) designed to frame AI appropriately (e.g., as 'machines' for critical tasks, 'tools' for productivity). Focus on clarity and consistency.

Phase 3: Training & Integration

Integrate AI literacy training into employee onboarding and continuous learning programs. Provide practical guidelines for interacting with LLMs based on their framing.

Phase 4: Monitoring & Iteration

Continuously monitor user reliance and attribution metrics. Gather feedback to refine messaging and training, ensuring long-term appropriate AI interaction and trust.

Ready to Optimize Your Enterprise AI Strategy?

Align your AI communication with user perception to enhance efficiency and foster trust.

Book Your Free Consultation.