Enterprise AI Analysis: AI Across Cultures: Co-Designing Equitable and Culturally Grounded Futures


Authors: Alfred Malengo Kondoro, Jaydon Farao, Cynthia Jayne Amol, Kato Steven Mubiru, Bronson Bakunga, Gilbert Kiplangat Korir, Chris Chinenye Emezue

Executive Impact Summary

Artificial Intelligence (AI) and Human-Computer Interaction (HCI) are often framed as universal; yet, their foundations are shaped by the data, infrastructures, and policies of high-resource contexts. This leaves low-resource languages and culturally diverse communities underrepresented, amplifying inequities in access, adoption, and governance. This CHI 2026 workshop, AI Across Cultures: Co-Designing Equitable and Culturally Grounded Futures, brings together HCI researchers, AI/NLP practitioners, policymakers, and community leaders to reimagine AI as a sociotechnical system co-created with and for diverse contexts. We focus on aligning design methodologies, policy frameworks, and cultural philosophies to foster pluralistic, equitable, and sustainable AI ecosystems. Through case studies, participatory activities, and cross-border dialogue, the workshop will surface practices, challenges, and frameworks that integrate cultural identity, linguistic diversity, and community priorities into AI. The outcomes will include collaborative mappings, actionable guidelines, and a synthesis paper to guide future culturally grounded HCI and AI research.


Deep Analysis & Enterprise Applications


Redefining AI & HCI for Diverse Cultures

In Human-Computer Interaction (HCI), technologies are understood not merely as tools but as sociotechnical systems that both shape and are shaped by people's practices, values, and contexts. Artificial Intelligence (AI) is often framed as a universal enabler, celebrated for expanding access to knowledge, bridging communication across borders, and supporting new forms of participation in digital societies [14, 17].

Yet the infrastructures, datasets, and policies that power contemporary AI are predominantly drawn from high-resource contexts and do not always reflect the realities of communities whose languages and practices are underrepresented online. This is acutely visible in African language technologies, where efforts consistently document gaps in data, tooling, evaluation, and deployment for low-resource languages, alongside the practical constraints of computing and infrastructure. Recent work further highlights the linguistic and sociotechnical challenges of building robust resources and systems for African languages, including morphology, corpus construction, and pretraining strategies [1, 22].

Early approaches to AI and HCI often assumed that technologies could be designed as universal solutions, drawing primarily from the data, infrastructures, and policies of high-resource contexts. While this has driven remarkable technical progress, it has also meant that the needs, practices, and languages of communities with fewer resources have been underrepresented in both design and governance [1, 2, 5].

This workshop, AI Across Cultures: Co-Designing Equitable and Culturally Grounded Futures, positions cultural diversity and low-resource contexts as essential starting points for rethinking how AI and HCI intersect. We take AI not as a one-size-fits-all technology, but as a sociotechnical construct that must evolve with and through the cultural and linguistic contexts in which it is deployed [7, 23].

Goals for Culturally Grounded AI Futures

The workshop aims to convene an interdisciplinary community of HCI researchers, AI practitioners, policymakers, and community representatives to reimagine how AI can be developed in ways that are inclusive, ethical, and culturally grounded. Rather than treating AI as a universal technology, we frame it as a sociotechnical system that must evolve in dialogue with diverse cultural, linguistic, and societal contexts.

Our specific goals are to:
1. Map practices and challenges in adapting AI to culturally diverse and low-resource contexts, identifying successes, failures, and gaps.
2. Develop shared frameworks for co-designing AI and governance, emphasizing how sociotechnical approaches can sustain inclusive innovation.
3. Formulate actionable guidelines for HCI researchers and practitioners to embed cultural, linguistic, and community perspectives into AI design.
4. Foster long-term collaboration by seeding a global network of researchers, practitioners, and policymakers committed to equity-centered AI ecosystems.
5. Create a collective artifact that captures speculative visions of futures where AI supports cultural preservation, social justice, and community empowerment.

To guide discussion and collaboration, the workshop will engage participants in reflecting on key questions such as: How can AI be adapted to support indigenous languages and cultural contexts? What sociotechnical frameworks are needed? How can design practices and policy development be pursued together? And how can HCI bridge the space between technical feasibility, governance, and lived cultural practices?

The outcomes of the workshop will include:
  • A collaboratively produced map of the current landscape of AI research, deployment, and usage in low-resource contexts;
  • An open invitation to participants to co-author a synthesis paper that captures insights from the workshop, identifies research gaps, and proposes directions for culturally grounded AI and HCI research;
  • Continued engagement through the workshop website and mailing list, enabling participants to build long-term collaborations and extend discussions beyond CHI.

Interactive Sessions for Collaborative Futures

The workshop will be highly interactive, combining presentations, collaborative mapping, and collective reflection. Besides selected position contributions and a short keynote, there will be two collaborative and generative activities: a visual mapping activity in the style of a 'World Café', and a thematic synthesis of discussions in the style of affinity clustering.

The 'World Café' format combines structured small-group interaction with visual synthesis. It allows participants from diverse backgrounds to contribute knowledge without the conversation being dominated by a few voices. Additionally, the visual maps help externalize the breadth of experiences and contextual insights on AI in low-resource settings. The activity should generate a collective 'map' of AI use, barriers, and innovations in low-resource contexts, which will be digitized and shared as a living document post-workshop.

Affinity clustering transforms a large volume of raw discussion into digestible thematic insights in a participatory way, rather than relying solely on facilitators. It results in a set of co-constructed thematic clusters capturing shared challenges and emergent opportunities, which can form the foundation for a workshop report, paper, or collaborative position statement.

Our Interdisciplinary Team

Alfred Malengo Kondoro: Graduate researcher at Hanyang University, focusing on HCI, AI, and NLP. Contributes to multilingual dataset creation for African languages and examines AI adoption in resource-constrained settings, advocating for community-led, culturally grounded AI approaches.

Jaydon Farao: PhD researcher in Computer Science at the University of Cape Town. Research focuses on human-computer interaction, digital storytelling, and participatory design to support community well-being in under-resourced contexts.

Cynthia Jayne Amol: PhD student at Maseno University and African NLP researcher. Co-founder and lead data validator for Tonative, an African AI community, she focuses on community-led extension of African datasets and organizes collaborative workshops.

Kato Steven Mubiru: AI and Autonomous Systems Architect and Co-Founder of Crane AI Labs. Specializes in sovereign Digital Public Infrastructure for the Global South, co-leads applied research into offline-first, culturally grounded AI systems, and co-develops African AI safety frameworks like the UCCB.

Bronson Bakunga: Machine Learning Engineer and NLP researcher pursuing an MSc. Research focuses on fine-tuning large language models for low-resource African languages, developing code-switching-aware translation systems, and creating synthetic datasets. Contributes to Cohere Labs, aiming to democratize accessible AI in African contexts.

Gilbert Kiplangat Korir: Graduate student at WorldQuant University with interests in explainable and responsible AI for underrepresented contexts. Works with MsingiAI, a research lab focused on democratizing AI access in Africa through open-source initiatives.

Chris Chinenye Emezue: Researcher and entrepreneur, founder of Lanfrica, a platform addressing the discoverability of African language resources. Research spans NLP, causality, and reinforcement learning, with applications in adaptive intelligent systems. Contributes to Nigeria's National Artificial Intelligence Policy.

Workshop Snapshot

20-50 Expected Participants for an Interactive CHI 2026 Session

Engagement Duration

3 Hours of Focused, Collaborative Co-Creation

Traditional vs. Culturally Grounded AI Development

Core Assumption
  • Traditional AI approaches: AI as a universal solution, drawing from high-resource contexts.
  • Workshop approach: AI as a sociotechnical system evolving with cultural & linguistic contexts.

Community Representation
  • Traditional AI approaches: Low-resource languages and diverse communities underrepresented; dominant cultural assumptions.
  • Workshop approach: Cultural diversity & low-resource contexts as essential starting points; community-led co-creation.

Focus & Outcomes
  • Traditional AI approaches: Driven by technical progress; uneven benefits and risks.
  • Workshop approach: Fostering pluralistic, equitable, sustainable AI ecosystems; collaborative mappings & actionable guidelines.

Collaborative World Café & Visual Mapping Process

Divide participants into small groups (4-6 people) and assign thematic tables.
Groups rotate across tables, adding examples, challenges, and opportunities.
Each table selects a volunteer to summarize contributions; participants refine connections.
Brief plenary reflection identifies key tensions and gaps.
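The rotation in the steps above can be sketched as a simple round-robin assignment. This is only an illustrative scheduling sketch; the group, table, and round counts are hypothetical assumptions, not specifics from the workshop plan.

```python
def world_cafe_schedule(num_groups, num_tables, num_rounds):
    """Round-robin rotation: in round r, group g sits at table (g + r) % num_tables.

    Returns a list of rounds, each mapping group index -> table index,
    so every group visits a different table in each round (while rounds <= tables).
    """
    return [
        {g: (g + r) % num_tables for g in range(num_groups)}
        for r in range(num_rounds)
    ]

# Hypothetical example: 6 groups rotating across 6 thematic tables over 3 rounds.
schedule = world_cafe_schedule(num_groups=6, num_tables=6, num_rounds=3)
for r, assignment in enumerate(schedule, start=1):
    print(f"Round {r}: {assignment}")
```

With rounds no greater than the number of tables, each group visits a distinct table every round, which matches the intent of rotating groups across thematic tables.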

Advancing African Language AI

The 'AI Across Cultures' workshop aligns with pioneering efforts to build robust AI ecosystems for African languages. Initiatives like Lanfrica, a platform founded by organizer Chris Chinenye Emezue, address the discoverability problem for African language resources. Other organizers contribute to multilingual dataset creation for Swahili and other African languages (Alfred Kondoro), code-switching-aware translation systems (Bronson Bakunga), and the development of African AI safety frameworks like the UCCB (Kato Steven Mubiru). These projects exemplify the workshop's commitment to community-led, culturally grounded, and equitable AI development, ensuring technology serves diverse linguistic and societal needs.

Illustration of African AI initiatives

Calculate Your Potential ROI

Estimate the tangible benefits of integrating culturally grounded AI strategies into your enterprise.


Your Path to Culturally Grounded AI

A structured approach to integrating AI that respects and leverages cultural diversity.

Phase 1: Discovery & Cultural Immersion

Initial consultations and workshops to understand your organization's unique cultural context, linguistic landscape, and specific business needs. Identify key stakeholders and underrepresented communities.

Phase 2: Co-Design & Prototyping

Collaborative design sessions with community leaders, domain experts, and engineers. Develop initial AI prototypes, focusing on data sourcing, model training, and interaction design that respects local practices and languages.

Phase 3: Ethical Development & Iteration

Implement robust ethical AI governance frameworks. Conduct iterative testing and refinement with user groups, ensuring models are fair, transparent, and perform effectively in diverse cultural settings. Address biases and ensure data sovereignty.

Phase 4: Deployment & Sustained Impact

Roll out AI solutions with continuous monitoring and support. Establish feedback loops for ongoing cultural adaptation and model improvement. Provide capacity-building for local teams to ensure long-term sustainability and empowerment.

Ready to Transform Your Enterprise with Inclusive AI?

Book a complimentary strategy session with our experts to explore how culturally grounded AI can drive innovation and foster equitable growth in your organization.
