Enterprise AI Analysis: A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs

HCI in Mental Health

A Conditional Companion: Lived Experiences of People with Mental Health Disorders Using LLMs

Large Language Models (LLMs) are increasingly used for mental health support, yet little is known about how people with mental health challenges engage with them, how they evaluate their usefulness, and what design opportunities they envision. We conducted 20 semi-structured interviews with people in the UK who live with mental health conditions and have used LLMs for mental health support. Through reflexive thematic analysis, we found that participants engaged with LLMs in conditional and situational ways, drawn by immediacy, non-judgement, self-paced disclosure, cognitive reframing, and relational engagement. Simultaneously, participants articulated clear boundaries informed by prior therapeutic experience: LLMs were effective for mild-to-moderate distress but inadequate for crises, trauma, and complex social-emotional situations. We contribute empirical insights into the lived use of LLMs for mental health, highlight boundary-setting as central to their safe role, and propose design and governance directions for embedding them responsibly within care ecosystems.

Executive Impact & Key Findings

This study examined how people with mental health challenges engage with LLMs for support. Through 20 semi-structured interviews, we surfaced lived experiences that illuminate both the motivations for use and the limitations of these systems. Participants valued LLMs not only for their immediacy and non-judgmental stance, but also for the ability to engage in self-paced disclosure, to reframe and structure thoughts, and to experience a sense of relational engagement. Simultaneously, they drew clear boundaries, emphasizing that LLMs were inadequate for crises, trauma, or socially and emotionally complex situations that require human empathy and judgment. Drawing on these findings, we propose design and governance strategies that position LLMs as supplementary tools within broader care systems rather than as substitutes for therapists. By highlighting how users define the role of LLMs, this study contributes to the development of responsible, user-focused AI applications in mental health care.

20 Interviews Conducted

Deep Analysis & Enterprise Applications

The findings below are organized around the study's three research questions: User Engagement & Motivation (RQ1), Boundaries & Limitations (RQ2), and Design & Governance (RQ3).
Instant Support for Acute Stress

Participants frequently contrasted the instant availability of LLMs with the long wait times for therapy... Others valued the effortlessness of access, emphasizing that LLMs could be consulted anytime and anywhere without appointments.

Safe Space: Confidential Expression

LLMs appear to embody key elements of the therapeutic alliance by offering unconditional positive regard and accepting individuals without judgment. Several participants described being more willing to disclose sensitive thoughts to LLMs than to human clinicians and therapists.

Self-Paced Disclosure Benefits

  • Control timing
  • Set scope
  • Revisit conversations
  • Type at own tempo
Reframing & Cognitive Structuring

Participants highlighted their use of LLMs to de-escalate overwhelming thoughts by turning them into structured, step-by-step reflections... This practice parallels cognitive restructuring, a therapeutic technique in which therapists help clients notice and change unhelpful thoughts.

Companionship: Friend-like Presence

Several participants described their interactions with LLMs in explicitly relational terms, often anthropomorphizing the system as a friend... Others described ChatGPT as a consistent and reliable confidant.

LLM vs. Human Support

Feature | LLM | Human Therapist
Crisis/Emergency | Inadequate for crises and trauma | Indispensable
Social/Emotional Complexity | Struggles with nuance | Brings irreplaceable resources
Relational Depth | Surface-level, ephemeral | Builds deep connection

Participants articulated clear limits on when LLMs were helpful and when they were not, and these judgments were strongly shaped by their prior experience with in-person therapy.

Crisis = Human Professional Care Needed

Participants agreed that LLMs were inadequate in moments of acute crisis or severe trauma... They valued LLMs for mild-to-moderate difficulties, but in a crisis they saw professional human support as indispensable.

The 'Better Than Nothing' Reality

Our findings illustrate an important reality of socio-technical systems in HCI: an imperfect system is better than nothing. Participants generally preferred talking to a real therapist who is well-trained, immediately available, non-judgmental, a good companion, and able to help with cognitive structuring and reframing. However, they reported difficulty finding a therapist and cited long waiting times. Given those waits, their use of ChatGPT for mental health support was rational.

Safeguards: Guidelines & Escalation

Participants consistently called for stronger safeguards, not bans, to manage risk. They stressed that completely banning LLMs for mental health use would cut off an important support option, and that stronger guidance and well-defined escalation pathways are therefore crucial.
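
To make the idea of an escalation pathway concrete, below is a minimal Python sketch of a gate that screens messages before they reach the model and hands off to crisis resources when risk language appears. This is an illustration under stated assumptions: the keyword screen is a naive stand-in for a validated clinical risk classifier, and route_message and generate_llm_reply are hypothetical names, not components described in the study.

```python
# Minimal sketch of an escalation gate in front of an LLM chat loop.
# The keyword screen is a naive stand-in for a validated risk classifier.

CRISIS_TERMS = {"suicide", "kill myself", "self harm", "end my life"}

CRISIS_RESOURCES = (
    "It sounds like you may be in crisis. This assistant cannot help with "
    "emergencies. Please contact local emergency services or a crisis line "
    "such as Samaritans (116 123 in the UK)."
)

def generate_llm_reply(user_message: str) -> str:
    # Placeholder for a real model call (e.g., an API request).
    return f"(LLM reply to: {user_message!r})"

def route_message(user_message: str) -> str:
    """Return a crisis hand-off instead of an LLM reply when risk terms appear."""
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return CRISIS_RESOURCES
    return generate_llm_reply(user_message)

if __name__ == "__main__":
    print(route_message("I feel overwhelmed by work this week."))
    print(route_message("I want to end my life."))
```

In practice such a gate would route to region-appropriate services and log the event for review rather than returning static text.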

Shorter Outputs: Multimodal & Scaffolding

Participants frequently described long, text-heavy responses as overwhelming, especially when they experienced anxiety or ADHD. They proposed shorter multimodal outputs and structured prompts that mimic the scaffolding of therapy.
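
As one hedged illustration of what prompt-level scaffolding could look like, the sketch below assumes a generic chat-completion API and encodes brevity and step-by-step structure in a system prompt. The prompt wording, generation settings, and the build_request helper are illustrative assumptions, not designs proposed by participants.

```python
# Illustrative system prompt and request builder for shorter, scaffolded
# replies. Assumes a generic chat-completion API; nothing here comes from
# the study itself.

SCAFFOLDED_SYSTEM_PROMPT = """\
You are a supportive companion for mild-to-moderate distress.
- Keep each reply under 80 words.
- Ask at most one reflective question per turn.
- When the user feels overwhelmed, break the problem into numbered steps.
- Do not give clinical advice; suggest professional help for crises."""

GENERATION_SETTINGS = {
    "max_tokens": 120,   # hard cap backing up the in-prompt word limit
    "temperature": 0.7,  # moderate variability for a conversational tone
}

def build_request(history: list, user_message: str) -> dict:
    """Assemble a chat request with the scaffolding prompt prepended."""
    messages = [{"role": "system", "content": SCAFFOLDED_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    return {"messages": messages, **GENERATION_SETTINGS}

request = build_request([], "Everything feels like too much today.")
print(request["max_tokens"], len(request["messages"]))
```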

User Control: Selective Memory & Tone Adaptation

Participants wanted more control over continuity and tone in interactions. While some appreciated the ability of LLMs to remember past exchanges, they worried about overdependence and privacy. They requested selective memory and customizable relational styles.
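
A minimal sketch of how such controls might be represented, assuming a simple per-user settings record: memory is opt-in, specific topics can be excluded from retention, and tone is a user-chosen preference. The ConversationPreferences fields and filter_memory helper are hypothetical, not taken from the paper.

```python
# Sketch of user-controlled memory and tone via a per-user settings record.

from dataclasses import dataclass, field

@dataclass
class ConversationPreferences:
    remember_sessions: bool = False                  # memory is opt-in
    forget_topics: set = field(default_factory=set)  # topics never retained
    tone: str = "warm"                               # e.g. "warm", "neutral", "direct"

def filter_memory(prefs: ConversationPreferences, notes: list) -> list:
    """Return only the remembered notes the user has not asked to forget."""
    if not prefs.remember_sessions:
        return []
    return [note for note in notes
            if not any(topic in note.lower() for topic in prefs.forget_topics)]

prefs = ConversationPreferences(remember_sessions=True,
                                forget_topics={"family"}, tone="direct")
print(filter_memory(prefs, ["Struggled with a family visit", "Sleep improved"]))
# -> ['Sleep improved']
```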

LLM Integration into the Care Ecosystem

  • Connect with wearables
  • Link to apps
  • Share clinical notes
  • Complement professional treatment
Adjunct Tool: Support, Not Replacement

Participants expressed a desire for closer collaboration between LLMs and human therapists. They suggested that AI could act as a communication bridge by capturing thoughts between sessions and sharing them with professionals.
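
One possible shape for that bridge, sketched under assumptions: between-session reflections are bundled into a consent-gated export the user can choose to share with a clinician. The export_for_clinician function and JSON fields are illustrative; a deployed version would use real summarization and meet clinical data-protection requirements.

```python
# Sketch of the "communication bridge" idea: between-session reflections are
# bundled for a clinician only with explicit consent. Truncation stands in
# for real summarization here.

import json
from datetime import date
from typing import Optional

def export_for_clinician(entries: list, consent_given: bool) -> Optional[str]:
    """Bundle user-approved reflections into a shareable JSON note."""
    if not consent_given:
        return None  # nothing is shared without opt-in
    summaries = [entry[:200] for entry in entries]  # naive summarization stand-in
    return json.dumps({
        "exported_on": date.today().isoformat(),
        "between_session_notes": summaries,
    }, indent=2)

print(export_for_clinician(["Felt anxious before Monday's meeting."],
                           consent_given=True))
```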


Implementation Roadmap

A typical phased approach to integrating AI into your enterprise, ensuring a smooth transition and measurable impact.

Phase 1: Discovery & Strategy

Conduct a thorough analysis of current workflows, identify key AI opportunities, and define a strategic roadmap aligned with business objectives.

Phase 2: Pilot & Proof-of-Concept

Develop and test a pilot AI solution on a small scale to validate its effectiveness, gather feedback, and refine the approach.

Phase 3: Integration & Scaling

Integrate the AI solution into existing systems, expand its deployment across relevant departments, and establish performance monitoring.

Phase 4: Optimization & Future-Proofing

Continuously monitor AI performance, iterate based on data and feedback, and explore new AI capabilities for long-term growth.

Ready to Transform Your Enterprise with AI?

Book a personalized 30-minute consultation with our AI experts to explore how these insights can be applied to your specific business challenges and opportunities.
