Enterprise AI Analysis

From Paternalism to Pixels: Unmasking AI's Stereotypes in Doctor-Patient Relationships

This study examines how Generative AI (GenAI) text-to-image models, increasingly used in healthcare, reproduce and amplify gendered and racialized stereotypes within the Doctor-Patient Relationship (DPR). Analyzing 200 images from OpenAI's DALL-E 3 (2024) and GPT Image 1 (2025) depicting the DPR in both the paternalistic (1960s) and participatory (post-2000) eras, the research found that physicians were overwhelmingly depicted as White men and patients as Black and/or female, with minimal Asian representation. These visual outputs reinforce a hierarchy in which medical authority aligns with Whiteness and masculinity, amplifying existing power asymmetries rather than challenging them. The study concludes that GenAI biases are sociotechnical expressions of medical paternalism, necessitating bias-aware design and critical engagement with the visual culture of care.

Executive Impact: Key Metrics & Findings

Understand the quantifiable insights derived from our analysis of AI's representational biases in healthcare imagery.

200 AI images analyzed
Gwet's AC1 coefficient used to assess inter-rater reliability
73% of GPT Image 1 (2025) current-era male patients depicted as Black
96% of GPT Image 1 (2025) current-era doctors depicted as White
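
The reliability statistic above, Gwet's AC1, corrects observed agreement for chance and is more stable than Cohen's kappa when category prevalences are skewed. Below is a minimal Python sketch of the two-rater AC1 computation; the image labels are hypothetical illustration data, not the study's actual codings.

```python
from collections import Counter

def gwet_ac1(rater_a, rater_b):
    """Gwet's AC1 chance-corrected agreement for two raters over categorical labels."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters coded identically.
    pa = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Mean share of ratings in each category, pooled across both raters.
    counts = Counter(rater_a) + Counter(rater_b)
    pi = [c / (2 * n) for c in counts.values()]
    q = len(pi)  # number of categories observed
    # Gwet's chance-agreement term; zero if only one category appears.
    pe = sum(p * (1 - p) for p in pi) / (q - 1) if q > 1 else 0.0
    return (pa - pe) / (1 - pe)

# Hypothetical example: two coders labelling perceived physician race in ten images.
coder_1 = ["White", "White", "Black", "Asian", "White", "White", "Black", "White", "White", "Asian"]
coder_2 = ["White", "White", "Black", "Asian", "White", "Black", "Black", "White", "White", "Asian"]
print(f"Gwet's AC1: {gwet_ac1(coder_1, coder_2):.3f}")
```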

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

The quantitative analysis of 200 images revealed systematic biases in the gender, ethnicity, and age of depicted doctors and patients. Both DALL-E 3 and GPT Image 1 consistently depicted physicians as predominantly White men and patients as Black and/or female. In the current era (post-2000), DALL-E 3 showed women predominating among doctors (30/50), but these doctors were still overwhelmingly White (80%), with Black and Asian doctors almost absent. Patients in this era were frequently Black (66%), and 79% of female patients were depicted as Black. For GPT Image 1, current-era doctors were overwhelmingly male (47/50) and White (96%), with only minimal Asian representation (4%); patients were again predominantly Black (72%), and all were young adults. The 1960s era showed even more pronounced disparities: nearly all doctors were depicted as White men, and patients were predominantly Black (76% for DALL-E 3, 44% for GPT Image 1) and often female. These findings confirm that traditional hierarchies persist and are amplified regardless of historical period or model generation, with statistically significant differences across multiple demographic comparisons.
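
As one illustration of the kind of test such a demographic comparison involves (the study's exact procedures may differ), a chi-square test of independence on the reported current-era physician gender counts shows that the gap between the two models is far beyond chance:

```python
from scipy.stats import chi2_contingency

# Current-era physician gender counts as reported above:
# rows = model, columns = (women, men) out of 50 images each.
table = [
    [30, 20],  # DALL-E 3 (2024): 30/50 doctors depicted as women
    [3, 47],   # GPT Image 1 (2025): 3/50 doctors depicted as women
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.2e}, dof = {dof}")
```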

The hermeneutic analysis interpreted the visual representations not as mere technical glitches but as 'archaeological artifacts' reflecting deeper historical, cultural, and algorithmic influences. The consistent depiction of White male physicians and Black/female patients visually encodes symbolic hierarchies of care and authority. This aligns with Rancière's 'consensual reality' where AI reconstructs historical power dynamics. The near-total absence of Asian physicians, despite their significant presence in real-world healthcare, highlights a polarization where authority is exclusively White and male. The overrepresentation of Black and female patients as passive or pathologized figures reinforces historical associations of minority groups with illness and vulnerability, perpetuating a self-reinforcing cycle. This underscores how AI systems mirror linguistic and cultural hierarchies, acting as 'hypomediators' that funnel perceptions towards normative, univocal roles, effectively re-inscribing a 'master-servant' dynamic into technologically mediated imagery.

The study's findings have critical implications for the uncritical adoption of AI text-to-image generation (AI-TIG) models in healthcare. The reproduction and amplification of gendered and racialized biases undermine efforts towards equitable, patient-centered care. The persistence of paternalistic imagery, despite the shift to participatory models of care, symbolically reinforces outdated power structures. This visual bias, rooted in training data and prompt interpretation, can shape perceptions, decision-making, and professional aspirations, particularly for women and racial minorities. Addressing these inequities requires not only technical advances such as diversified datasets and bias mitigation pipelines but also a fundamental rethinking of how medicine conceptualizes and visually represents physicians and patients. Bias-aware design, transparency, and critical engagement with the visual culture of care are essential if AI systems are to contribute to more inclusive and just healthcare rather than perpetuate systemic inequalities.

76% of 1960s patients depicted as Black in DALL-E 3 outputs

AI-TIG Bias Propagation Flow

Historical stereotypes in training data → Prompt interpretation & amplification → AI-generated visuals reinforce biases → Perpetuation of structural hierarchies → Impact on healthcare perceptions
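
To make this loop concrete, here is a toy simulation of the amplification dynamic: a model trained on a skewed corpus over-samples the majority depiction, and its outputs feed back into future training data. The starting share and amplification factor are assumptions for illustration, not measured values.

```python
# Toy feedback-loop simulation: outputs skew further than the training mix,
# then re-enter the corpus. Both parameters below are illustrative assumptions.
share = 0.70          # assumed share of White male doctors in initial training data
amplification = 1.10  # assumed over-sampling of the majority depiction per generation

for generation in range(1, 6):
    share = min(1.0, share * amplification)  # model output feeds the next corpus
    print(f"generation {generation}: White male doctor share = {share:.2f}")
```
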
Feature Comparison: DALL-E 3 (2024) vs. GPT Image 1 (2025)

Physician Gender Bias (Male)
  • DALL-E 3: predominantly male in the 1960s era; women predominate among current-era doctors (30/50)
  • GPT Image 1: overwhelmingly male (47/50 men, 3 women); the feminization of medicine is not reflected

Physician Race Bias (White)
  • DALL-E 3: overwhelmingly White (100% of men, 80% of women); Black and Asian doctors almost absent
  • GPT Image 1: overwhelmingly White (96% of men, 100% of women); no Black doctors depicted

Patient Gender/Race Bias
  • DALL-E 3: 1960s patients predominantly Black and/or female; current-era patients frequently Black and/or female (33/50 Black; 15/19 female patients Black)
  • GPT Image 1: consistently Black and/or female patients; in the current era, 73% of male patients and 60% of female patients are Black

Historical Period Impact
  • DALL-E 3: the paternalistic 1960s show the greatest disparities (all 1960s doctors male and White)
  • GPT Image 1: disparities persist across eras; 1960s doctors almost exclusively White men (49/50)

Bias Mitigation Effectiveness
  • DALL-E 3: limited evidence of effective bias reduction, especially in the 1960s context
  • GPT Image 1: the newer model shows persistent or exacerbated biases, notably the near-total absence of Asian physicians

Case Study: The 'White Male Doctor, Black Female Patient' Archetype

Our analysis consistently reveals a strong visual archetype: the White male doctor treating a Black and/or female patient. This pattern mirrors historical power dynamics in healthcare, where White men traditionally held authority, and marginalized groups experienced subordinate roles. AI-generated imagery amplifies this by frequently depicting Black individuals, particularly women, in roles of vulnerability and illness, while White men are consistently portrayed as authoritative medical professionals. This reinforces existing structural inequities rather than challenging them, highlighting how AI models can re-inscribe harmful stereotypes into visual culture.

Your Roadmap to Equitable AI

A phased approach to integrate bias-aware AI solutions and foster inclusive healthcare representations.

Phase 1: Bias Audit & Data Diversification

Conduct comprehensive audits of AI-TIG models for gender, racial, and age biases. Implement strategies to diversify training datasets with ethnically and gender-balanced representations across various professional roles and patient scenarios. Focus on underrepresented groups in positions of authority.
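
As a concrete starting point for such an audit, the sketch below compares the demographic mix of a batch of generated "doctor" images against a reference benchmark using total variation distance. Both the labels and the benchmark shares are illustrative placeholders; in practice, labels would come from trained human coders (as in this study) or a validated classifier, and the benchmark from real workforce statistics.

```python
from collections import Counter

# Hypothetical audit inputs: coded race labels for generated "doctor" images,
# and a reference benchmark distribution. Both are placeholders, not real data.
observed_labels = ["White", "White", "White", "Asian", "White", "Black", "White", "White"]
benchmark = {"White": 0.60, "Asian": 0.20, "Black": 0.10, "Other": 0.10}

n = len(observed_labels)
observed = {group: count / n for group, count in Counter(observed_labels).items()}

# Total variation distance: half the L1 gap between observed and benchmark shares.
groups = set(benchmark) | set(observed)
tvd = 0.5 * sum(abs(observed.get(g, 0.0) - benchmark.get(g, 0.0)) for g in groups)
print(f"Total variation distance from benchmark: {tvd:.3f}")
```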

Phase 2: Contextual Prompt Engineering & Model Fine-tuning

Develop and integrate bias-aware prompt engineering guidelines to generate more equitable imagery. Fine-tune GenAI models with explicit fairness mechanisms and reward systems that prioritize diverse, non-stereotypical outputs in relational contexts like the DPR. Address the 'markedness' of subordinated identities.
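
One lightweight ingredient of bias-aware prompt engineering is filling unspecified role slots with explicitly sampled demographics instead of leaving them to the model's defaults. The function below is a minimal sketch of that idea; the attribute pools and template are illustrative assumptions, not a vetted taxonomy, and a production system would need to respect user intent and context.

```python
import random

# Illustrative attribute pools; a real deployment would use a reviewed taxonomy.
GENDERS = ["woman", "man", "non-binary person"]
ETHNICITIES = ["Black", "White", "Asian", "Hispanic", "Middle Eastern"]

def debias_prompt(template: str) -> str:
    """Fill a '{doctor}' slot with explicitly sampled demographic attributes."""
    doctor = f"a {random.choice(ETHNICITIES)} {random.choice(GENDERS)} physician"
    return template.format(doctor=doctor)

prompt = debias_prompt("{doctor} discussing treatment options with a patient in a 2020s clinic")
print(prompt)
```

Sampling over explicit attributes counters the 'markedness' problem described above, in which unmarked prompts default to the unmarked (White, male) identity for authority roles.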

Phase 3: Integration into Healthcare Education & Media

Pilot AI-generated imagery in medical education, patient communication materials, and public health campaigns. Critically evaluate visual outputs with diverse stakeholders to ensure they challenge, rather than reinforce, existing power asymmetries. Develop guidelines for ethical use in clinical and public contexts.

Phase 4: Continuous Monitoring & Iterative Improvement

Establish long-term monitoring systems to track bias evolution across model generations and diverse applications. Create feedback loops for continuous model refinement and dataset updates. Foster ongoing research into the sociotechnical impacts of AI-generated visual culture on perceptions of care.
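
A minimal version of such monitoring logs the demographic mix of each model release's outputs and flags any group whose share drifts beyond a tolerance relative to a baseline. The baseline, release data, and threshold below are invented for illustration:

```python
from collections import Counter

def group_shares(labels):
    """Proportion of coded images per demographic group."""
    n = len(labels)
    return {group: count / n for group, count in Counter(labels).items()}

# Invented example data: coded physician race in a baseline and a new release.
baseline = group_shares(["White"] * 40 + ["Black"] * 5 + ["Asian"] * 5)
release = group_shares(["White"] * 48 + ["Black"] * 2)

TOLERANCE = 0.05  # maximum acceptable shift in any group's share
for group in set(baseline) | set(release):
    drift = release.get(group, 0.0) - baseline.get(group, 0.0)
    if abs(drift) > TOLERANCE:
        print(f"ALERT: share of {group} physicians drifted by {drift:+.2f}")
```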

Ready to Build a More Equitable AI Future?

Our team specializes in auditing, developing, and deploying AI solutions that are fair, transparent, and aligned with your ethical standards. Let's work together to transform your enterprise's AI landscape.

Ready to Get Started?

Book Your Free Consultation