ENTERPRISE AI ANALYSIS
Uses of generative AI by non-clinician staff at an academic medical center
This study provides a quantitative analysis of real-world LLM chat tool use by staff at an academic medical center. It reveals that 98% of users are non-clinicians, who primarily leverage LLMs for administrative tasks such as email and document writing (53.9%), text manipulation (9.1%), and brainstorming (6.7%). A critical finding is that 5.9% of interactions involved clinical decision-making, frequently initiated by non-clinician users, highlighting the need for tailored training and governance.
Executive Impact & Key Metrics
Non-clinician staff are the primary users of generative AI in this healthcare setting, and their usage is dominated by administrative tasks. However, off-label use for clinical decision-making by non-clinicians represents a significant risk. Targeted training, refined governance, and enhanced evaluation frameworks are essential to maximize benefits while mitigating risks.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
Diverse Non-Clinician Roles Leveraging AI
The study highlights that non-clinician staff, including administrative assistants, case managers, and interpreters, constitute the vast majority (98%) of users. This demonstrates broad applicability beyond traditional clinical roles, enabling efficiency gains across diverse administrative and support functions within the healthcare system.
"Our analysis of 30,503 chat threads moves beyond prior survey-based or pre-categorized approaches, revealing how non-clinicians adopt and integrate these LLM tools."
Enterprise Process Flow
| Category | Observed Usage | Expected/Ideal Usage |
|---|---|---|
| Administrative Tasks | 53.9% (email and document writing) | |
| Clinical Decision Support | 5.9% of interactions | |
| Non-Work Related Queries | | |
| Language Translation | | |
| Coding Assistance | | |
Mitigating Risks: Clinical Decision-Making by Non-Clinicians
A significant risk identified is the use of LLMs by non-clinicians for clinical decision-making, comprising 5.9% of interactions. This 'off-label' use poses risks to patient safety and necessitates strict governance. Role-specific training and clear guidelines are crucial to ensure AI tools are used within appropriate scopes of practice.
"Despite the majority of user roles being non-clinical, a significant proportion of prompts were related to clinical decision making."
| Recommendation | Benefit | Implementation Detail |
|---|---|---|
| Real-time Dashboards | Early visibility into usage patterns and emerging risks | Deploy usage dashboards alongside the secure, HIPAA-compliant chat tool (see Phase 1) |
| Role- & Department-Specific Guidance | Keeps AI use within appropriate scopes of practice | Targeted training and educational resources for non-clinician roles, emphasizing prevention of off-label clinical use (see Phase 2) |
| Automate Validated Use Cases | Efficiency gains on repetitive administrative work | Automate high-value tasks such as prior authorizations and patient education documents with validated tools (see Phase 3) |
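To illustrate the dashboard recommendation, the following sketch aggregates a hypothetical usage log by department, reporting interaction volume and the share of flagged interactions. The log schema and column names are assumptions, not the institution's actual telemetry.

```python
import pandas as pd

# Hypothetical usage-log schema; column names are assumptions for illustration.
logs = pd.DataFrame([
    {"department": "Patient Access", "role": "administrative assistant",
     "category": "Email/Document Writing", "flagged": False},
    {"department": "Care Coordination", "role": "case manager",
     "category": "Clinical Decision Support", "flagged": True},
])

# Dashboard view: interaction volume and flagged share per department.
summary = (
    logs.groupby("department")
        .agg(interactions=("category", "size"), flagged_share=("flagged", "mean"))
        .reset_index()
)
print(summary)
```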
Expanding MedHELM Taxonomy for Non-Clinical Use
The study revealed several non-clinical uses (Email/Document Writing, Coding, Technical Support, Text Manipulation, Brainstorming) not adequately captured by existing frameworks like MedHELM. This highlights the need to update AI task taxonomies to reflect real-world enterprise usage more accurately, ensuring better classification and understanding of diverse applications.
"This gap illustrates limits in existing frameworks' ability to capture real-world LLM uses among non-clinician staff. We propose the addition of these tasks to MedHELM version 2 under the 'Administration & Workflow' category."
Advanced ROI Calculator
Estimate your potential savings and efficiency gains by deploying AI solutions tailored to your enterprise.
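A simplified version of the calculation behind such an estimate is sketched below. Every input is a hypothetical placeholder to be replaced with your organization's own staffing, volume, and cost figures.

```python
# Back-of-the-envelope time-savings estimate; all inputs are hypothetical
# placeholders, not figures from the study.
def estimated_annual_savings(staff: int, drafts_per_week: float,
                             minutes_saved_per_draft: float, hourly_cost: float) -> float:
    """Annual dollar savings from faster drafting of routine documents."""
    hours_saved = staff * drafts_per_week * 52 * minutes_saved_per_draft / 60
    return hours_saved * hourly_cost

print(f"${estimated_annual_savings(staff=200, drafts_per_week=10, minutes_saved_per_draft=8, hourly_cost=35):,.0f}")
```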
Implementation Roadmap
Our phased approach ensures a smooth and effective integration of AI into your existing workflows.
Phase 1: Secure Deployment & Monitoring
Deploy a secure, HIPAA-compliant LLM chat tool with real-time usage dashboards to identify patterns and emerging risks. Establish initial governance policies.
Phase 2: Role-Specific Training & Governance
Develop and implement targeted training programs and educational resources for different non-clinician roles, emphasizing appropriate use and preventing 'off-label' activities, particularly in clinical decision-making.
Phase 3: Workflow Automation & Taxonomy Expansion
Identify and automate high-value, repetitive administrative tasks (e.g., prior authorizations, patient education documents) using validated AI tools. Expand internal AI task taxonomies to accurately reflect diverse non-clinical applications.
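A minimal sketch of the human-in-the-loop pattern this phase assumes is shown below: generated drafts (e.g., a prior-authorization letter) are held in a review queue and nothing is released without explicit approval. The class and function names are hypothetical and no specific LLM vendor or generation step is assumed.

```python
from dataclasses import dataclass, field

@dataclass
class DraftTask:
    """A generated administrative draft that must clear human review before use."""
    task_type: str                  # e.g. "prior_authorization" or "patient_education"
    draft_text: str                 # output of a validated generation step (not shown)
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

def review(task: DraftTask, approve: bool, note: str = "") -> DraftTask:
    """Record a human reviewer's decision; nothing is released without approval."""
    task.approved = approve
    if note:
        task.reviewer_notes.append(note)
    return task

task = review(DraftTask("prior_authorization", "<generated draft>"), approve=False,
              note="Missing payer-specific criteria; return to queue.")
```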
Phase 4: Continuous Evaluation & Optimization
Regularly review usage data, policy effectiveness, and user feedback. Iterate on training, governance, and tool capabilities to maximize efficiency gains and ensure patient safety and data security.
Ready to Transform Your Enterprise?
Connect with our AI specialists to develop a bespoke strategy that drives innovation and efficiency.