Enterprise AI Analysis: The explainable AI dilemma under knowledge imbalance in specialist AI for glaucoma referrals in primary care

Unlocking Precision in Glaucoma Referrals: The Explainable AI Dilemma

Primary eye care providers refer glaucoma patients using their clinical experience and context. Specialist Artificial Intelligence (AI) trained on clinical data excels at referral decisions, but it relies on assumptions that may not hold in practice. To address this knowledge imbalance, we proposed using AI explanations to help providers weigh AI predictions against their own experience and improve referrals. We developed AI models that identify patients needing urgent referral from routine eye care data and created both intrinsic and post-hoc explanations. In a user study with 87 optometrists, human-AI teams achieved higher accuracy (60%) than humans alone (51%), but explanations did not enhance performance. Instead, they introduced uncertainty about when to trust the AI. Post-hoc explanations increased over-reliance on incorrect AI recommendations, and both explanation types contributed to anchoring bias, with participants aligning more closely with the AI's referrals than they did without explanations. Overall, human-AI teams still underperformed the AI alone (80% accuracy). Challenges persist in designing support mechanisms that surpass AI-alone performance while preserving human agency.

Executive Summary: AI in Glaucoma Referral

This analysis focuses on the integration of explainable AI (XAI) in glaucoma referral processes for primary eye care providers. The study highlights the potential benefits and challenges of AI assistance in decision-making, particularly concerning knowledge imbalance and user trust.

60% Human-AI Accuracy
51% Human Alone Accuracy
80% AI Alone Accuracy
↑9 pts Accuracy Boost (Human-AI vs Human Alone)

Deep Analysis & Enterprise Applications


Explore the scientific approach and experimental design used to evaluate AI support and explanations.

Study Workflow: Human-AI Teaming Evaluation

1. Review Vignettes (Human Only)
2. AI Support Introduced (Random Order)
3. Referral Decision & Confidence Rating
4. AI Trust & Integration Rating
5. Effort & Challenge Rating
6. Preferred Interaction Selection

AI Models & Explanation Types

The study deployed two primary AI models: a Vision Transformer (ViT) deep learning model and a risk scorecard model. The ViT model, a 'black-box' system, uses visual and numerical data to predict glaucoma surgery risk. Its explanations were generated post-hoc using SHAP values, highlighting key features such as PSD, RNFL thickness, and MD. The risk scorecard model, by contrast, is intrinsically explainable, providing a transparent numerical breakdown of the factors contributing to a referral score, so participants could see the exact calculation behind the AI's recommendation. Both models were calibrated to predict referral urgency: urgent (within 3 months), near future (3-12 months), or no referral currently needed.
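For concreteness, the sketch below shows what an intrinsically explainable scorecard of this kind might look like. The feature thresholds, point values, and urgency cut-offs are illustrative assumptions, not the coefficients used in the study; the post-hoc route would instead attribute a black-box model's prediction after the fact (for example, with SHAP).

```python
# Illustrative only: thresholds and point values are assumptions, not the
# scorecard published in the study.

def scorecard_referral(psd_db, rnfl_um, md_db):
    """Intrinsically explainable risk scorecard: each feature contributes
    visible points, and the total maps to a referral urgency."""
    points = {
        "PSD": 2 if psd_db > 2.0 else 0,    # elevated pattern standard deviation
        "RNFL": 3 if rnfl_um < 80 else 0,   # thin retinal nerve fibre layer
        "MD": 2 if md_db < -6.0 else 0,     # depressed mean deviation
    }
    total = sum(points.values())
    if total >= 5:
        urgency = "urgent referral (within 3 months)"
    elif total >= 2:
        urgency = "near-future referral (3-12 months)"
    else:
        urgency = "no referral currently needed"
    return points, total, urgency  # transparent breakdown shown to the clinician


breakdown, score, decision = scorecard_referral(psd_db=2.4, rnfl_um=72, md_db=-7.1)
print(breakdown, score, decision)
```

Because every point is visible, a clinician can see exactly why the scorecard recommends a given urgency, which is the contrast the study draws against post-hoc feature-importance explanations.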

Understand the main outcomes regarding human-AI performance, explanation effectiveness, and cognitive biases.

Human-AI Performance vs. AI Alone

While human-AI teams improved over human-alone, they significantly underperformed compared to AI alone.

80% AI Alone Accuracy (Across Vignettes)

Impact of Explanation Types on Key Metrics

A comparative overview of how different AI support types affected accuracy, over-reliance, and AI deviation score.

Metric | Human Only | No Explanation (AI) | Feature Importance (AI+XAI) | Scoring-based (AI+XAI)
Accuracy | 51% | 58% | 60% | 62%
Over-reliance on incorrect AI predictions | N/A | 77% | 92% | 85%
AI deviation score (lower = closer alignment with the AI) | N/A | 0.33 | 0.24 | 0.22
  • Human-AI teams improved accuracy over human-alone (51% to 60-62%).
  • Post-hoc explanations (Feature Importance) led to increased over-reliance on incorrect AI recommendations (92%).
  • Both explanation types contributed to anchoring bias, as reflected in lower AI deviation scores (closer alignment with the AI's recommendations); a sketch of how these metrics can be computed follows below.
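The definitions below are a hedged reconstruction of how accuracy, over-reliance, and the AI deviation score could be computed from per-vignette decision logs. The `VignetteResponse` fields and the normalisation are assumptions for illustration; the study's exact formulas may differ.

```python
# Hedged sketch: metric definitions approximate those reported in the table.
from dataclasses import dataclass


@dataclass
class VignetteResponse:
    truth: int        # correct referral class (0 = none, 1 = near future, 2 = urgent)
    ai_pred: int      # AI-recommended class
    human_final: int  # participant's final decision with AI support


def accuracy(responses):
    return sum(r.human_final == r.truth for r in responses) / len(responses)


def over_reliance(responses):
    """Share of cases with an incorrect AI prediction where the participant
    nonetheless followed the AI."""
    wrong_ai = [r for r in responses if r.ai_pred != r.truth]
    if not wrong_ai:
        return float("nan")
    return sum(r.human_final == r.ai_pred for r in wrong_ai) / len(wrong_ai)


def ai_deviation(responses):
    """Mean normalised distance between the final decision and the AI
    recommendation; lower values indicate closer alignment (anchoring)."""
    max_gap = 2  # referral classes span 0..2
    return sum(abs(r.human_final - r.ai_pred) for r in responses) / (max_gap * len(responses))


log = [VignetteResponse(2, 2, 2), VignetteResponse(0, 1, 1), VignetteResponse(1, 1, 0)]
print(accuracy(log), over_reliance(log), ai_deviation(log))
```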

Discover the practical implications for clinical practice and directions for future research in human-AI collaboration.

The Explainable AI Dilemma

Explanations did not universally enhance performance, and in some cases, exacerbated biases like over-reliance.

The study reveals a critical dilemma: while AI explanations are intended to improve trust and performance, they did not universally enhance accuracy and, in some cases, led to increased over-reliance on incorrect AI recommendations. This suggests that simply providing explanations without careful design for integration and context may not be sufficient. The 'knowledge imbalance' between specialized AI and generalist providers means that providers struggle to discern when to trust or override AI, even with explanations.

Optimizing Trust Calibration in AI-Assisted Diagnostics

Future work needs to focus on designing support mechanisms that enable proper reliance and integrate human expertise.

Enhancing Glaucoma Referral Workflows

Challenge: Primary eye care providers need to accurately identify high-risk glaucoma patients for timely referral, but current AI assistance, even with explanations, still underperforms compared to AI alone and can introduce cognitive biases.

Solution: Develop advanced human-AI interaction designs that move beyond static explanations. This includes staged decision-making processes where clinicians first assess a case, then review AI recommendations, and finally make a decision. Integrate interactive explanations that allow for querying and exploration of AI reasoning, fostering a deeper mental model of the system. Implement feedback loops and longitudinal training to help providers calibrate trust over time.
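A minimal sketch of such a staged workflow is shown below. The `StagedReferralCase` class, its stage names, and the audit-trail fields are hypothetical illustrations of the design, not an implementation from the study.

```python
# Hypothetical staged-decision workflow; field and method names are assumptions.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class StagedReferralCase:
    case_id: str
    initial_decision: Optional[int] = None   # clinician's pre-AI assessment
    ai_recommendation: Optional[int] = None  # revealed only after stage 1
    ai_explanation: str = ""
    final_decision: Optional[int] = None
    audit: list = field(default_factory=list)

    def record_initial(self, decision: int) -> None:
        self.initial_decision = decision
        self.audit.append(("initial", decision))

    def reveal_ai(self, recommendation: int, explanation: str) -> None:
        # AI support is withheld until the clinician has committed to a view,
        # reducing the chance of anchoring on the model's output.
        if self.initial_decision is None:
            raise RuntimeError("Clinician assessment must come before AI support")
        self.ai_recommendation = recommendation
        self.ai_explanation = explanation
        self.audit.append(("ai", recommendation, explanation))

    def record_final(self, decision: int) -> bool:
        self.final_decision = decision
        self.audit.append(("final", decision))
        # The initial/final delta can feed longitudinal trust-calibration feedback.
        return decision != self.initial_decision
```

Logging the gap between the initial and final decisions gives each clinician concrete feedback on when deferring to the AI helped or hurt, which is the basis of the trust-calibration loop described above.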

Outcome: Improved overall accuracy for human-AI teams, with reduced over-reliance on incorrect AI predictions and better integration of human clinical expertise. This will lead to more timely and appropriate referrals, preventing irreversible vision loss and optimizing specialist resource allocation.

Calculate Your Potential ROI with Explainable AI

Estimate the potential ROI for integrating explainable AI into your clinical referral pathways.


Your AI Implementation Roadmap

A phased approach to integrating AI into clinical workflows.

Phase 1: Needs Assessment & Data Preparation

Identify specific referral bottlenecks, gather and preprocess relevant patient data, and define success metrics.

Phase 2: Model Development & XAI Integration

Train and validate specialized AI models for referral prediction. Integrate intrinsic and post-hoc XAI methods tailored for clinicians.
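To illustrate the post-hoc half of this phase, the snippet below attributes a stand-in classifier's prediction with SHAP. The synthetic features (PSD, RNFL_thickness, MD), the GradientBoostingClassifier, and the data are placeholder assumptions standing in for the study's Vision Transformer pipeline; only the attribution pattern carries over.

```python
# Placeholder example: synthetic data and a simple classifier stand in for the
# study's black-box model; SHAP provides the post-hoc feature attributions.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["PSD", "RNFL_thickness", "MD"]  # illustrative features
X = rng.normal(size=(500, 3))
y = (0.8 * X[:, 0] - 1.2 * X[:, 1] - 0.6 * X[:, 2] > 0).astype(int)  # 1 = urgent referral

model = GradientBoostingClassifier().fit(X, y)

# Model-agnostic post-hoc explainer over the predicted probability of "urgent".
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X[:100])
explanation = explainer(X[:5])

for name, value in zip(feature_names, explanation.values[0]):
    print(f"{name}: {value:+.3f}")  # per-feature contribution to this prediction
```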

Phase 3: Pilot Deployment & User Training

Conduct pilot studies with a small group of clinicians. Provide comprehensive training on AI interpretation, explanation usage, and trust calibration.

Phase 4: Iterative Refinement & Full-Scale Rollout

Collect feedback from the pilot and refine models and explanations. Gradually roll out to broader clinical teams, ensuring ongoing monitoring and support.

Ready to Transform Your Referral Process?

Book a strategic session with our AI experts to discuss how explainable AI can enhance your diagnostic accuracy and operational efficiency.
