Enterprise AI Analysis: A Systematic Review on Human Roles, Solutions, and Methodological Approaches to Address Bias in AI


Amal Hashky and Eric Ragan, Computer and Information Science and Engineering, University of Florida, Gainesville, United States

People play a significant role in designing, developing, and employing artificial intelligence (AI) systems. They can consider contextual information beyond the scope of AI models, thereby influencing system outcomes. At the same time, people's choices or biases can introduce problems into those systems. This paradox, in which people can both introduce machine bias and help relieve it, demands comprehensive, multidisciplinary approaches that use informed human intervention to improve system performance and reduce bias. Researchers across various communities have investigated multifaceted methods to reduce and mitigate bias in AI systems. Regardless of the method, humans are always involved in debiasing in one way or another, underscoring the importance of human intervention during AI system development.

Our key contributions:

  • We present a detailed taxonomy of solutions to address bias, categorizing the research methodologies and outlining standard evaluation benchmarks.
  • We define humans' roles within the AI lifecycle and classify the extent of their involvement in minimizing data and algorithmic biases.
  • We illustrate differences in the motivations and research methodologies employed to investigate solutions for bias across the ML and HCI fields.

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Bias in AI: A Multidisciplinary Definition

For our review, we define bias in AI as: Systematic and unfair deviations in data or algorithmic outcomes that disadvantage particular individuals or groups. Unlike random errors, bias reflects consistent distortions that can undermine robustness across diverse contexts, perpetuate existing societal inequities, or create new forms of unfairness when embedded in automated decision-making systems. Bias is often shaped and perpetuated by human actors through decisions made during problem formulation, data collection, model design choices, and deployment.
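The distinction drawn above between random error and systematic, group-level distortion can be made concrete with a small check: compare an error rate (here, the false positive rate) across groups. The following is a minimal sketch in plain Python; the data, group labels, and metric choice are illustrative assumptions, not drawn from the review.

```python
# Sketch: measuring a systematic group-level disparity in errors.
# Inputs are toy data for illustration only.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN) over paired label/prediction lists."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def group_fpr_gap(y_true, y_pred, groups):
    """Largest pairwise difference in FPR across groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = false_positive_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = group_fpr_gap(y_true, y_pred, groups)
print(rates, gap)  # group "a" has FPR 2/3, group "b" has FPR 0.0
```

A large, stable gap like this reflects the definition's "consistent distortion" disadvantaging one group, as opposed to noise spread evenly across groups.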

Our Systematic Review Process (PRISMA 2020)

Studies from databases/registers (n = 833)
References removed (n = 161)
Studies screened (n = 672)
Irrelevant studies excluded (n = 497)
Studies assessed for eligibility (n = 175)
Studies excluded (n = 75)
Studies included in review (n = 100)

Solution Types Across AI Lifecycle Stages

| AI Lifecycle Stage | Principles & Design Guidelines | Non-Algorithmic Frameworks | Algorithmic Solutions | Tools & Techniques |
| Design             | Yes | Yes | Yes | Yes |
| Data Curation      | Yes | Yes | Yes | Yes |
| Development        | Yes | Yes | Yes | Yes |
| Deployment         | Yes | Yes | No  | No  |

This table summarizes the prevalence of different solution types across the AI lifecycle stages. Principles & Design Guidelines and Non-Algorithmic Frameworks appear at every stage, while Algorithmic Solutions and Tools & Techniques do not extend to Deployment; Algorithmic Solutions in particular are heavily concentrated in the Development phase.

72% of papers proposed algorithmic bias reduction methods, highlighting a technical focus.

Case Study: Iterative Data Collection with Feminist Principles

Suresh et al. [147] presented a framework grounded in feminist theory that supports iterative data collection and annotation through participatory and co-design processes, exemplifying how high-level principles can be translated into structured, operational practices for addressing bias. The workflow responds to observed model weaknesses and explicitly interrogates framing decisions, such as who is included or excluded in definitions of feminicide. It defines concrete roles for practitioners, domain experts, and community stakeholders; establishes procedures for revisiting labels, categories, and data sources over time; and provides actionable guidance for prioritizing marginalized groups and focusing on intersectional identities.

Human Roles in Bias Mitigation

Humans play a significant role in designing, developing, and employing AI systems. Our analysis identified distinct human roles contributing to bias mitigation: AI/ML Practitioners (97% of solutions), Data Annotators (9%), Policymakers (16%), Domain Experts (5%), and End Users (4%). Each role contributes at different stages of the AI lifecycle, from problem definition to deployment and oversight.

Levels of Human Involvement in Debiasing

| Level | Characteristics | Impact/Examples |
| Low-Level | Limited to one-time or infrequent interventions; system operates autonomously after initial setup. | Data preprocessing (aggregation, resampling), improved clustering algorithms. [14, 16, 18, 34, 46, 47, 50, 62, 71, 76, 86, 91, 117, 120, 122] |
| Medium-Level | Requires regular human input, often involving AI/ML practitioners making decisions aligned with framework standards. | Employing algorithmic frameworks for fairness and robustness, iteratively refining datasets, adjusting algorithms to meet standards. [21, 85, 93, 109, 118, 158, 165, 167, 172] |
| High-Level | Continuous human interactions and high-impact decisions with multi-faceted effects across the AI lifecycle; includes institutional policies and organizational practices. | Principles and design guidelines, non-algorithmic frameworks, interactive tools that require continuous monitoring and complex decision-making. [43, 44, 63, 119, 130, 132, 144, 153, 164] |

Human involvement varies significantly, from singular interventions to continuous, high-impact decisions affecting system design, ethics, and policy.
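A low-level intervention of the kind listed above — a one-time resampling step before training — can be sketched as follows. The record format, group key, and oversampling strategy are hypothetical stand-ins; production pipelines would typically use a library's stratified resampling rather than this toy loop.

```python
# Sketch of a one-time "low-level" debiasing intervention: oversampling
# smaller groups so all groups are equally represented before training.
import random

def oversample_to_balance(records, group_key, seed=0):
    """Duplicate randomly sampled records from smaller groups until
    every group matches the size of the largest one."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(rng.choice(members)
                        for _ in range(target - len(members)))
    return balanced

data = [{"group": "a"}] * 6 + [{"group": "b"}] * 2
balanced = oversample_to_balance(data, "group")
counts = {g: sum(1 for r in balanced if r["group"] == g) for g in ("a", "b")}
print(counts)  # {'a': 6, 'b': 6}
```

After this single intervention the system runs autonomously, which is exactly what places such preprocessing at the low end of the involvement scale.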

HCI vs. ML: Different Motivations

The Machine Learning (ML) community tends to prioritize robustness (73% of papers) to enhance model performance in real-world scenarios. In contrast, the Human-Computer Interaction (HCI) community places greater emphasis on fairness (60% of papers) to reduce social and ethical biases and ensure equal opportunities across individuals and groups. Both communities contribute to bias mitigation but from distinct vantage points.
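The two emphases can be illustrated by the metric each community tends to report: a performance-style measure (accuracy, a proxy for the ML community's robustness focus) versus a fairness-style measure (demographic parity difference, closer to HCI concerns). The inputs below are made-up toy data, and the metric pairing is our illustrative choice, not a claim about any specific surveyed paper.

```python
# Sketch contrasting a performance metric with a fairness metric
# on the same toy predictions.

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups (0 means identical rates)."""
    def rate(g):
        return (sum(p for p, gi in zip(y_pred, groups) if gi == g)
                / groups.count(g))
    gs = sorted(set(groups))
    return max(rate(g) for g in gs) - min(rate(g) for g in gs)

y_true = [1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(accuracy(y_true, y_pred))                 # 4/6
print(demographic_parity_diff(y_pred, groups))  # 2/3 - 1/3 = 1/3
```

A model can score well on one axis and poorly on the other, which is why the two communities' distinct vantage points are complementary rather than redundant.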

The Role of Explainable AI (XAI)

While XAI techniques do not directly reduce bias, they play an important complementary role. XAI supports interpretation, bias detection, and human-in-the-loop oversight by making model behavior more transparent. This enables developers, domain experts, and policymakers to understand algorithmic biases and guide follow-up actions, such as adjusting model design or influencing system decisions at deployment time. It serves as an enabling tool for governance and human oversight, not a standalone bias-reducing solution.
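As a sketch of how an explanation technique can surface bias without removing it, the snippet below applies permutation importance to a toy scoring rule and shows that the rule leans heavily on a sensitive proxy feature. The "model", feature names, and data are assumptions for illustration, standing in for any trained classifier; they are not a method from the surveyed papers.

```python
# Sketch: permutation importance as a bias-detection aid. A feature
# whose shuffling flips many predictions is one the model relies on.
import random

def model(row):
    # Toy scoring rule that (problematically) weights a proxy feature.
    return 1 if 0.8 * row["zip_risk"] + 0.2 * row["income"] > 0.5 else 0

def permutation_importance(rows, feature, trials=50, seed=0):
    """Average fraction of predictions that flip when `feature` is
    shuffled across rows; higher means stronger reliance on it."""
    rng = random.Random(seed)
    base = [model(r) for r in rows]
    flips = 0
    for _ in range(trials):
        vals = [r[feature] for r in rows]
        rng.shuffle(vals)
        shuffled = [{**r, feature: v} for r, v in zip(rows, vals)]
        flips += sum(b != model(s) for b, s in zip(base, shuffled))
    return flips / (trials * len(rows))

data_rng = random.Random(1)
rows = [{"zip_risk": data_rng.random(), "income": data_rng.random()}
        for _ in range(200)]
imp_zip = permutation_importance(rows, "zip_risk")
imp_income = permutation_importance(rows, "income")
print(imp_zip, imp_income)  # expect reliance on zip_risk to dominate
```

The importance scores do not fix anything by themselves; they give developers, domain experts, and policymakers evidence for follow-up actions such as removing the proxy feature or revisiting the model design.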

Challenges in Addressing Bias

Addressing bias in AI faces several significant challenges:

  • Complexity of Bias: Bias can originate from various sources and manifest at different stages of the AI lifecycle (data collection, algorithmic design, deployment conditions), requiring distinct mitigation strategies. Identifying and measuring bias is inherently difficult, often deeply embedded and not always apparent.
  • Lack of Real-World Evaluations: Most research relies on controlled, data-based or user-based experiments, which fail to capture the full scope of real-world complexity. Only 2% of surveyed papers reported in-situ evaluations, even though such studies best reflect performance under realistic conditions. Logistical, ethical, privacy, and organizational challenges hinder real-world testing.
  • Limited Interdisciplinary Collaboration: While bias mitigation is multidisciplinary, significant gaps remain. ML focuses on algorithmic solutions, while HCI offers human-centered perspectives. Bridging this gap requires broader dialogue and collaboration to translate theoretical solutions into practical applications.

Your Roadmap to Ethical & Robust AI

A phased approach to integrating human-centered bias mitigation into your enterprise AI strategy.

Phase 1: Discovery & Assessment

Conduct a comprehensive audit of existing AI systems and data pipelines to identify potential sources of bias. Define ethical guidelines and fairness metrics relevant to your business context.

Phase 2: Strategy & Design

Develop a human-centered AI strategy, integrating participatory design principles. Design bias mitigation frameworks, emphasizing diverse stakeholder involvement in data curation and model development.

Phase 3: Implementation & Training

Implement algorithmic solutions and interactive tools. Train AI/ML practitioners, data annotators, and other stakeholders on best practices for bias reduction and ethical AI development.

Phase 4: Monitoring & Governance

Establish continuous monitoring mechanisms and governance frameworks. Ensure ongoing human oversight, explainable AI integration, and a feedback loop for iterative improvement and policy refinement.
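A minimal sketch of the continuous-monitoring idea in Phase 4: compare each new batch's per-group positive-outcome rate against a baseline and raise an alert when the gap exceeds a tolerance. The batch format, tolerance value, and group labels are illustrative assumptions, not a prescribed monitoring design.

```python
# Sketch of a per-group drift check for deployed-model decisions.

def selection_rates(decisions):
    """decisions: list of (group, outcome) pairs with outcome in {0, 1};
    returns each group's positive-outcome rate."""
    totals, positives = {}, {}
    for g, y in decisions:
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + y
    return {g: positives[g] / totals[g] for g in totals}

def drift_alerts(baseline, batch, tolerance=0.1):
    """Groups whose rate moved more than `tolerance` from baseline,
    mapped to (baseline_rate, current_rate)."""
    base, now = selection_rates(baseline), selection_rates(batch)
    return {g: (base[g], now[g]) for g in base
            if g in now and abs(now[g] - base[g]) > tolerance}

baseline = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
batch = [("a", 1), ("a", 1), ("a", 1), ("b", 0), ("b", 0), ("b", 0)]
print(drift_alerts(baseline, batch))  # both groups drift past 0.1
```

Alerts like these feed the human-oversight loop described above: they do not decide anything themselves, but flag where practitioners or policymakers should intervene.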

Ready to Build Fairer AI?

Leverage expert insights to proactively address bias and enhance the trustworthiness of your AI systems. Book a consultation with our AI strategists.
