Enterprise AI Analysis: Opportunities and risks of artificial intelligence in patient portal messaging in primary care


Leveraging AI in Patient Portal Messaging: Opportunities & Critical Risks

A recent study assessed the safety and perceptions of AI-generated draft responses to patient portal messages in primary care. While AI showed promise in reducing physician cognitive burden, a significant portion of errors remained unaddressed, highlighting patient safety risks and the urgent need for improved design and training.

Authored by Joshua M. Biro, Jessica L. Handley, J. Malcolm McCurry, Adam Visconti, Jeffrey Weinfeld, J. Gregory Trafton & Raj M. Ratwani

Key Insights for Healthcare Leaders

The research reveals a dual landscape: AI offers clear benefits in workload reduction, yet introduces significant safety concerns through unaddressed errors. Understanding this dynamic is crucial for responsible AI adoption in patient communication.

40% of Erroneous AI Drafts Submitted Unedited
75% of PCPs Believed AI Drafts Were Safe
80% of PCPs Agreed AI Reduced Cognitive Workload

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Primary care physicians face escalating workloads due to patient portal messages, contributing significantly to burnout. AI-powered tools are emerging as a potential solution to draft responses, aiming to reduce cognitive burden and improve efficiency in inbox management.

However, the rapid adoption of these generative AI tools in healthcare raises critical questions regarding their safety and the ability of human reviewers to detect and correct errors. This study investigates the likelihood of PCPs sufficiently addressing AI-generated inaccuracies.

The study revealed that practicing PCPs frequently missed errors in AI-generated draft responses. Of the four erroneous drafts presented, each error was insufficiently addressed by 13 to 15 of the 20 participants (65-75%). Alarmingly, 35-45% of these erroneous drafts were submitted entirely unedited, meaning the errors would have reached the patient.

The likelihood of an error being missed (i.e., reaching a patient after physician review) was found to be statistically significantly greater than zero for all error types tested.
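The study's exact statistical procedure is not reproduced here. As an illustration only, the minimal sketch below shows how a claim like this can be checked with an exact (Clopper-Pearson) binomial confidence interval on the observed miss rate; the counts (13 of 20 reviewers missing an error) are taken from the figures quoted above, and the use of scipy is an assumption.

```python
from scipy.stats import binomtest

# Illustrative counts from the summary above: 13 of 20 PCPs
# insufficiently addressed a given erroneous draft.
missed, reviewers = 13, 20

# Exact (Clopper-Pearson) 95% confidence interval for the miss rate.
result = binomtest(missed, reviewers)
ci = result.proportion_ci(confidence_level=0.95, method="exact")

print(f"Observed miss rate: {missed / reviewers:.0%}")
print(f"95% CI: [{ci.low:.2f}, {ci.high:.2f}]")
# A lower bound well above zero is consistent with the finding that the
# probability of an error reaching the patient is significantly greater
# than zero.
```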

Four categories of errors were identified in AI drafts: an Objective Typo (e.g., a misspelled medication name), Objective Outdated Advice (e.g., COVID-19 vaccination timing inconsistent with current CDC guidance), and two types of Potentially Harmful Omissions (a missed blood clot risk and a missed DKA risk).

Underlying cognitive biases such as functional fixedness, confirmation bias, automation complacency, and automation bias were posited as contributing factors to physicians overlooking errors in plausible AI-generated responses.

To mitigate patient safety risks, the study underscores the need for improved design, training, and robust error-detection mechanisms for AI tools in healthcare. This includes developing AI systems capable of identifying their own errors, and designing interfaces that highlight potentially erroneous aspects.

Furthermore, guidelines for human review of AI content are essential at organizational, state, and federal levels to ensure safe and effective integration into clinical practice.

Despite the high rate of missed errors, participating PCPs held a generally favorable view of AI-generated drafts. 95% found them helpful, 80% agreed they reduced cognitive workload, and 75% believed them to be safe to use.

This positive perception, coupled with the urgent need to address physician burnout, indicates a strong appetite for AI solutions, but also highlights the danger of over-reliance and the critical importance of balancing efficiency with safety.

40% of Erroneous AI Drafts Submitted Unedited
66.6% of Erroneous AI Drafts Missed by PCPs on Average

Enterprise Process Flow: AI-Assisted Patient Messaging

Patient Sends Message
AI Generates Draft
PCP Reviews & Edits
Message Submitted
Patient Receives Response
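A minimal sketch of this flow is shown below, with the physician review step modeled as an explicit gate that every AI draft must pass before submission. All class and function names here are hypothetical illustrations, not drawn from the study or any vendor API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DraftReply:
    patient_message: str
    body: str
    reviewed: bool = False   # set True only after a PCP has read the draft
    edited: bool = False     # set True if the PCP changed the draft

def review_gate(draft: DraftReply,
                pcp_review: Callable[[DraftReply], DraftReply]) -> DraftReply:
    """Force every AI draft through a physician review step before sending.

    `pcp_review` is a hypothetical callback representing the PCP's
    review-and-edit action. The design point is that no draft can reach
    the patient without passing through this gate.
    """
    reviewed = pcp_review(draft)
    if not reviewed.reviewed:
        raise RuntimeError("Draft cannot be sent without physician review")
    return reviewed
```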

Critical Oversight: The DKA Risk Scenario

One of the most concerning errors identified involved a patient message indicating symptoms suggestive of Diabetic Ketoacidosis (DKA) in a child (vomiting, excessive thirst, and urination issues). The AI-generated draft suggested a stomach virus and follow-up if vomiting continued, completely failing to address the urgent need for intervention for DKA.

This highlights how AI can misinterpret critical symptoms and, without diligent human review, could lead to severe patient harm due to delayed diagnosis and treatment. This specific error was missed by 75% of participants, and 40% submitted the unedited, potentially dangerous draft.

Feature Comparison: AI-Assisted Workflow vs. Manual Workflow

Cognitive Workload
  AI-assisted:
  • Reduced initial burden for drafting
  • Potential for increased cognitive load for error detection
  Manual:
  • High; a significant contributor to physician burnout
  • Requires full mental effort for drafting

Draft Speed & Efficiency
  AI-assisted:
  • Rapid generation of initial responses
  • Potential for faster inbox management if errors are minimal
  Manual:
  • Time-consuming; contributes to after-hours work
  • Slower response times due to manual effort

Error Potential
  AI-assisted:
  • Introduction of inaccuracies (typos, outdated information)
  • Harmful omissions due to misinterpretation
  • Increased risk from automation bias and complacency
  Manual:
  • Human error, but direct control over content
  • Errors typically due to oversight, not AI hallucination

Patient Safety Implications
  AI-assisted:
  • Highly dependent on diligent human oversight
  • Significant risks if critical errors are missed by PCPs
  Manual:
  • Direct physician responsibility for accuracy
  • Potentially higher inherent safety with careful review

Physician Perception
  AI-assisted:
  • Generally positive (helpful, safe, efficient)
  • Strong appetite for AI to alleviate burden
  Manual:
  • Associated with high workload and dissatisfaction
  • Often seen as a burden rather than an aid

Calculate Your Potential AI Impact

Estimate the hours reclaimed and cost savings your enterprise could realize by intelligently integrating AI into routine communication workflows, considering sector-specific efficiency gains.

Calculator outputs: Estimated Annual Savings ($) and Estimated Annual Hours Reclaimed.
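The calculator's underlying model is not published on this page. The sketch below uses an assumed, simplified formula (clinicians x messages per day x minutes saved per message x working days x loaded hourly cost) purely to show the kind of arithmetic involved; every input value is a placeholder, not a figure from the study.

```python
def estimate_ai_impact(clinicians: int,
                       messages_per_day: float,
                       minutes_saved_per_message: float,
                       working_days_per_year: int = 230,
                       loaded_hourly_cost: float = 150.0) -> dict:
    """Rough, assumption-driven estimate of hours reclaimed and cost savings."""
    hours_reclaimed = (clinicians * messages_per_day *
                       minutes_saved_per_message / 60 * working_days_per_year)
    return {
        "annual_hours_reclaimed": round(hours_reclaimed),
        "annual_savings_usd": round(hours_reclaimed * loaded_hourly_cost),
    }

# Example with placeholder inputs: 25 PCPs, 30 portal messages per day,
# 1.5 minutes saved per message.
print(estimate_ai_impact(clinicians=25, messages_per_day=30,
                         minutes_saved_per_message=1.5))
```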

Strategic AI Implementation Roadmap

Successfully integrating AI into patient portal messaging requires a phased approach, focusing on safety, training, and continuous improvement, as guided by the latest research findings.

Phase 1: Needs Assessment & Pilot Program

Identify high-volume or complex message types, establish clear success metrics (e.g., workload reduction, error rates), and conduct a controlled pilot with a select group of PCPs. Implement robust error tracking from the outset.

Phase 2: System Design & Error Mitigation

Develop or integrate AI systems with inherent error detection capabilities. Design user interfaces that proactively flag potentially erroneous or critical content for PCP review. Ensure AI integrates seamlessly with existing EHR data for context.
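One way to make the "proactively flag" requirement concrete is to attach machine-readable flags to each draft so the interface can highlight them for the reviewing PCP. The schema below is a hypothetical illustration under assumed names, not the study's design or any EHR vendor's API.

```python
from dataclasses import dataclass, field
from enum import Enum

class FlagType(Enum):
    POSSIBLE_TYPO = "possible_typo"
    OUTDATED_GUIDANCE = "outdated_guidance"
    POSSIBLE_OMISSION = "possible_omission"  # e.g., urgent symptom not addressed

@dataclass
class DraftFlag:
    flag_type: FlagType
    span: tuple[int, int]   # character offsets to highlight in the draft
    rationale: str          # short explanation shown to the PCP
    confidence: float       # model's self-reported confidence, 0-1

@dataclass
class FlaggedDraft:
    draft_text: str
    flags: list[DraftFlag] = field(default_factory=list)

    def needs_close_review(self, threshold: float = 0.5) -> bool:
        """True if any flag meets the confidence threshold for highlighting."""
        return any(f.confidence >= threshold for f in self.flags)
```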

Phase 3: Comprehensive Training & Guidelines

Provide PCPs with in-depth training on AI limitations and on recognizing automation biases (e.g., functional fixedness, confirmation bias), and establish clear protocols for reviewing and editing AI-generated drafts. Emphasize patient safety as the paramount concern.

Phase 4: Continuous Monitoring & Iteration

Implement ongoing audit processes for AI-generated responses and physician edits. Establish feedback loops for model retraining and continuous improvement. Create mechanisms for reporting AI-related incidents and learning from near-misses to enhance overall system safety and performance.
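A continuous-monitoring loop needs a consistent record of what the AI drafted, what the PCP changed, and whether an error was later reported. The minimal audit record below is a sketch under assumed field names, intended only to show the kind of data such a feedback loop would collect.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DraftAuditRecord:
    message_id: str
    ai_draft: str
    final_message: str
    edited_by_pcp: bool
    error_reported: bool   # filled in via incident / near-miss reporting
    reviewed_at: str

def log_review(message_id: str, ai_draft: str, final_message: str,
               error_reported: bool = False) -> str:
    """Serialize one review event for the audit trail (JSON lines)."""
    record = DraftAuditRecord(
        message_id=message_id,
        ai_draft=ai_draft,
        final_message=final_message,
        edited_by_pcp=(ai_draft.strip() != final_message.strip()),
        error_reported=error_reported,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))
```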

Ready to Safely Integrate AI?

Navigating the complexities of AI in healthcare requires expert guidance. Let's discuss a strategy that maximizes efficiency while prioritizing patient safety and clinician well-being.

Ready to Get Started?

Book Your Free Consultation.
