Enterprise AI Analysis
Young people's perceptions and recommendations for conversational generative artificial intelligence in youth mental health
The youth mental health crisis demands innovative solutions. Conversational generative AI (genAI) chatbots, like the Mental health Intelligence Agent (Mia), hold potential for enhancing triage, assessment, and self-management in youth mental health services. However, young people's perspectives on these tools remain largely unexplored. This study, leveraging co-design workshops with 32 young Australians (aged 18-30), investigated their perceptions and developed key recommendations for integrating genAI chatbots into mental health care. Four critical themes emerged: the need to humanise AI while safeguarding human connection, the demand for transparency in system functioning and data use, defining appropriate roles and touchpoints for genAI (navigator, assessor, educator), and enabling user control and safety through customisation and robust data governance. These insights underscore the complexity of deploying genAI in sensitive contexts, calling for ethical design, clear communication, and ongoing user involvement to ensure these tools support, rather than diminish, human-delivered care.
Executive Impact at a Glance
Key metrics from the study highlight the deep engagement and rich insights gathered from young people, forming a robust foundation for ethical and effective AI integration.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
Theme 1: Humanising AI without dehumanising care
Young people expressed a desire for genAI chatbots to exhibit human-like empathy, advocating for emotionally resonant, supportive, and personable tones. However, this was coupled with a strong concern that AI should support rather than replace human clinicians, reflecting doubts about AI's capacity to truly comprehend complex mental health experiences. Key requirements include accessible, non-clinical language, nuanced understanding of youth-specific and multilingual mental health terminology, and sensitive handling of confronting topics such as suicide. Participants also emphasised the need for high standards in information recall, case-formulation accuracy, and sensitivity to intersectionality, all while maintaining the centrality of human connection in care delivery.
Theme 2: I need to know what's under the hood
Participants demanded comprehensive transparency regarding genAI chatbots' operations, including their evidence base, decision-making processes, and data usage and storage practices. Accuracy was paramount: participants expected tools to interpret user inputs correctly, draw appropriate inferences, and provide reliable, evidence-grounded outputs. Young people valued functionality that provided insight into the AI's internal workings, such as explanations of interpretations and reasoning processes, enabling them to make independent judgements. While transparency was largely seen as empowering, a tension was noted: revealing potentially negative AI-generated insights could be disempowering. Proposed solutions included clear communication during onboarding about how the system operates and expanding the knowledge base with diverse global research.
Theme 3: Right tool, right place, right time?
This theme explored where genAI chatbots fit best within the youth mental health ecosystem, identifying three primary roles: navigators, assessors, and educators. As navigators, they would guide young people to appropriate services based on needs (location, cost). As assessors, they would interpret mental health information, identify critical signs, and generate personalised recommendations. As educators, they would inform users about treatments, service processes, and research evidence. Key touchpoints for deployment included pre-intake (self-screening), intake (assessments), preparing for clinical sessions, moments of crisis (risk assessment, distress management), and ongoing service engagement (psychoeducation, symptom tracking). This strategic positioning aims to improve access, efficiency, and continuity of care.
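The role-and-touchpoint structure above can be expressed as a small configuration sketch. This is a hypothetical illustration only: the role names come from the study, but the touchpoint identifiers and which roles apply at each touchpoint are our assumptions, not findings from the paper.

```python
from enum import Enum

class Role(Enum):
    NAVIGATOR = "navigator"  # routes users to services by need, location, cost
    ASSESSOR = "assessor"    # interprets inputs, identifies critical signs
    EDUCATOR = "educator"    # explains treatments, processes, evidence

# Touchpoints named in the study, mapped to the roles plausibly
# active at each one (the mapping itself is an assumption).
TOUCHPOINT_ROLES = {
    "pre_intake": {Role.NAVIGATOR, Role.EDUCATOR},        # self-screening
    "intake": {Role.ASSESSOR},                            # assessments
    "session_prep": {Role.EDUCATOR, Role.ASSESSOR},       # preparing for sessions
    "crisis": {Role.ASSESSOR, Role.NAVIGATOR},            # risk assessment, distress support
    "ongoing_engagement": {Role.EDUCATOR, Role.ASSESSOR}, # psychoeducation, tracking
}

def active_roles(touchpoint: str) -> set[Role]:
    """Return which chatbot roles apply at a given care-journey touchpoint."""
    return TOUCHPOINT_ROLES.get(touchpoint, set())
```

Making the mapping explicit like this lets a service audit, per touchpoint, which chatbot behaviours are in scope before deployment.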
Theme 4: Making it mine on safe ground
Sustained engagement with genAI chatbots hinges on user choice and safety. Young people emphasised maximum choice over interaction modalities (text/voice), access points (app/web), conversation structure (guided/open-ended), pacing, and level of information detail (opt-in for more). Critical safety concerns included data privacy (what is collected, stored, accessed, retained, and used for retraining) and interaction safety (the AI's ability to detect and appropriately respond to user risk, e.g. self-harm or violence, by directing to professionals). A key distinction was made: raw conversation data should remain private, but AI-generated clinical insights (summaries, risk flags) should be accessible to clinicians to ensure protective functions and coordinated care without compromising user autonomy.
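The privacy distinction participants drew can be stated as a single access-control rule. The sketch below is a minimal illustration of that rule; the type names and fields are hypothetical, not part of any described Mia implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    """A piece of data produced during a chatbot conversation."""
    kind: str  # "raw_transcript" or "derived_insight" (hypothetical labels)

def clinician_can_view(artifact: Artifact) -> bool:
    """Governance rule drawn from the co-design findings:
    raw conversation data stays private to the young person,
    while AI-generated clinical insights (summaries, risk flags)
    are visible to clinicians so protective functions still work."""
    return artifact.kind == "derived_insight"
```

Encoding the rule at the data-access layer, rather than in UI logic, makes the raw-versus-derived boundary enforceable and auditable.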
Enterprise Process Flow: Co-designing Mia for Youth Mental Health
Case Study: Transforming Mia from Professional to Consumer Tool
The Mental health Intelligence Agent (Mia) was originally designed as a tool for health professionals, aiding in triage, assessment, and treatment planning. Through the co-design process reported in this study, Mia is being reconceptualised for direct consumer use, informed by the nuanced perspectives of young people. This transformation emphasises the need for genAI that not only scales expertise but also integrates seamlessly into youth mental health services while prioritising human connection, transparency, and user safety. The iterative development, guided by lived experience, ensures Mia's relevance and trustworthiness as a complementary tool.
Calculate Your Potential AI Impact
Estimate the efficiency gains and cost savings your enterprise could achieve by strategically implementing AI solutions informed by user-centric design principles.
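Such an estimate reduces to simple arithmetic: time saved per interaction, multiplied by volume, converted to staff cost. The function below is a back-of-envelope sketch; every input value shown is an assumption to be replaced with your own service's figures, not data from the study.

```python
def estimated_annual_savings(sessions_per_year: int,
                             minutes_saved_per_session: float,
                             hourly_staff_cost: float) -> float:
    """Back-of-envelope savings estimate: staff time saved on
    triage/intake tasks, converted to an annual cost figure.
    All inputs are placeholders, not findings from the study."""
    hours_saved = sessions_per_year * minutes_saved_per_session / 60
    return hours_saved * hourly_staff_cost

# Illustrative only: 10,000 sessions/year, 12 minutes saved each,
# $80/hour staff cost -> 2,000 hours, i.e. $160,000.
```

A real business case would also need to account for implementation, governance, and clinical-oversight costs, which this sketch deliberately omits.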
Strategic Implementation Roadmap
A phased approach to integrating genAI chatbots into youth mental health services, prioritising ethical design, user engagement, and sustainable impact.
Phase 1: Foundational Design & Ethics
Establish a robust ethical framework focused on humanising AI, ensuring transparency, and preventing the dehumanisation of care. Co-design principles guide the initial design to build trust and define a supportive role for AI, rather than positioning it as a replacement for human connection.
Phase 2: Co-Design & Prototyping
Develop prototypes (like Mia) with iterative user testing, integrating young people's feedback on language, empathy, and functionality. Define AI's specific roles (navigator, assessor, educator) and appropriate touchpoints across the care journey, ensuring personalised experiences.
Phase 3: Integration & Governance
Integrate genAI chatbots into existing service systems with clear communication about capabilities and limitations. Implement granular data governance that distinguishes raw user data (private) from AI-generated insights (shared with clinicians for safety). Establish clinician intervention pathways for risk detection.
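A clinician intervention pathway of this kind can be sketched as a simple per-message decision rule. The function below is purely illustrative: the risk score, threshold, and action names are all hypothetical, and any real deployment would require clinically validated risk models rather than an arbitrary cutoff.

```python
def handle_turn(message_risk_score: float, threshold: float = 0.8) -> list[str]:
    """Minimal sketch of a risk-escalation pathway: every turn
    produces a derived insight for governance purposes; if the
    (hypothetical) risk score crosses the threshold, the system
    also surfaces crisis resources to the user and notifies the
    care team. All names and values here are assumptions."""
    actions = ["log_derived_insight"]
    if message_risk_score >= threshold:
        actions += ["show_crisis_resources", "notify_clinician"]
    return actions
```

Keeping escalation logic in one auditable function, separate from the conversational model, aligns with the transparency and governance expectations voiced by participants.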
Phase 4: Continuous Evaluation & Adaptation
Monitor performance against clinical standards and user satisfaction, with mechanisms for sustained lived experience input. Adapt the system based on ongoing feedback, research, and evolving ethical considerations to ensure long-term effectiveness, equity, and trust.
Ready to Innovate Your Mental Health Services with AI?
Leverage these insights to develop ethical, effective, and user-centric genAI solutions tailored to youth mental health. Our experts are ready to guide you.