Enterprise AI Analysis: Generative AI in Developing Countries: Adoption Dynamics in Vietnamese Local Government

Public Administration & Policy


This analysis explores the organizational factors shaping GenAI adoption in Vietnamese local government, revealing insights into the challenges and opportunities for public-sector innovation in emerging economies.

Executive Impact Snapshot

Key findings and insights at a glance for immediate strategic understanding.

  • Productivity Gain
  • Semi-Structured Interviews
  • AI Accountability Vacuum

Deep Analysis & Enterprise Applications

The sections below examine three organizational dynamics identified in the research: technological advantage, rational resistance, and the AI accountability vacuum.

Technological Advantage

Civil servants described substantial efficiency gains, particularly in drafting documents, summarizing information, and managing routine tasks. These perceived benefits align with the well-established role of performance expectancy in technology adoption models such as TAM and UTAUT. The conversational interface also reinforces ease of use, reflecting a technologically accessible design that lowers initial resistance.

However, this study reveals that the “ease of use” associated with GenAI is deceptively simple. While users can instantly interact with the tool, effective adoption requires a deeper skill set—particularly prompt engineering, domain-sensitive judgment, and the ability to detect hallucinations. This finding echoes [69], who argue that GenAI shifts cognitive labor rather than eliminating it. Our data illustrates a new form of complexity: GenAI appears simple but conceals advanced competencies that users must develop to avoid errors, misinformation, or data leakage.

This dynamic extends existing literature by highlighting the difference between approachability (low barrier to initial use) and operational mastery (high barrier to safe, effective use). The findings suggest that GenAI introduces a layered learning curve that traditional AI models did not require. This hidden complexity is particularly problematic in resource-constrained public organizations, where training, governance, and expert support are limited.

Rational Resistance

Traditional literature often frames resistance to new technologies as attitudinal, driven by fear, age, or technophobia [71]. In contrast, our findings point to what can be described as rational resistance. Employees are not resisting GenAI because they dislike it or fail to understand its potential; rather, they resist because organizational structures expose them to disproportionate risk. Leaders offer rhetorical support for GenAI but do not provide the enabling conditions—implementation plans, budgets, workflow integration, or advanced training—that would allow employees to use GenAI safely. As a result, civil servants navigate a profound misalignment between symbolic support and practical support.

Younger staff, more comfortable experimenting with new tools, tend to adopt GenAI informally, while older and more experienced staff—who understand bureaucratic accountability more deeply—rightly perceive GenAI as a high-risk, low-support activity. Their reluctance is therefore not emotional resistance but a logical response to institutional failures. This finding extends TOE-based studies by emphasizing the need to differentiate between cultural support (leadership encouraging innovation) and structural support (resources, plans, and protections). Our findings suggest that without structural support, cultural encouragement may actually intensify risk, pushing employees into unofficial and unsupported experimentation.

AI Accountability Vacuum

The environmental dimension of the TOE framework is typically described in terms of external pressures, regulatory conditions, and institutional norms. This dimension also requires rethinking in light of GenAI. Rather than existing as a stable or well-defined set of rules, the “environment” in this context is characterized by an AI accountability vacuum. This vacuum consists of three interlocking uncertainties: security risks, regulatory void, and accountability ambiguity.

The security risks stem from the probabilistic, data-absorbing nature of GenAI tools, which raises concerns about national secrets, sensitive citizen information, and internal administrative data. The regulatory void, in which no explicit guidance exists on what can be used, shared, or generated, amplifies this uncertainty. The most paralyzing component, however, is accountability ambiguity. Employees fear that if GenAI produces errors, leaks sensitive content, or generates misleading recommendations, responsibility will fall entirely on them. This fear is reasonable in bureaucratic systems where compliance is strictly monitored and penalties can be severe.

The accountability vacuum therefore produces not only hesitation but also institutional paralysis: policy implementation stalls, and budgets and investment for research and development cannot be allocated, because existing legal frameworks are ambiguous and departmental jurisdictions overlap. The paralysis is compounded by the absence of regulations on use, the complexity of financial procedures for paying for account subscriptions, and a risk-averse culture toward unproven technologies. As a result, employees may use GenAI privately but avoid any formal application where risk can be traced. This extends the TOE framework by illustrating that environmental factors are not limited to rules or pressures; they can also manifest as the absence of governance. Such a vacuum is not neutral—it produces unpredictable risks that discourage formal adoption and push innovation into shadow adoption.

Shadow Adoption: Unofficial GenAI use driven by the lack of formal integration and by security risks.

Enterprise Process Flow

Unofficial & Voluntary Use → Institutional Capacity Constraints → AI Accountability Vacuum → Governance Paralysis

GenAI vs. Traditional AI: Key Differences for Public Sector

Output Nature
  • Generative AI: Probabilistic, creative, context-dependent
  • Traditional AI: Deterministic, rule-based, structured
Data Dependence
  • Generative AI: Massive, heterogeneous, un-curated datasets
  • Traditional AI: Structured, clear provenance, limited scope
Governance Challenge
  • Generative AI: Accountability ambiguity, data leakage, explainability, bias
  • Traditional AI: Transparency, audit, compliance, predictable errors
Skill Requirements
  • Generative AI: Prompt engineering, critical evaluation, ethical judgment
  • Traditional AI: Technical literacy, process understanding

Case Study: Binh Duong Province, Vietnam

Binh Duong Province, a leader in digital transformation and smart city innovation in Vietnam, serves as a critical case. Despite strong governmental commitment and high digital readiness, GenAI adoption faces significant governance, capacity, and accountability barriers. This suggests that these constraints are structural, not merely transitional, and are likely more salient in less-resourced local governments. The province's experience highlights how a leading region can still be caught in a state of governance paralysis regarding formal GenAI adoption, relying heavily on informal, individual experimentation.

  • High digital readiness
  • Leading in smart city innovation
  • Yet faces governance paralysis
  • Informal GenAI use predominates

Advanced ROI Calculator

Estimate the potential efficiency gains and cost savings for your organization with GenAI implementation.

The calculator outputs two estimates: annual savings and hours reclaimed annually.
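The underlying arithmetic can be sketched as follows. This is a minimal back-of-the-envelope model, not the calculator's actual implementation; all inputs (staff count, hours saved per week, hourly cost, adoption rate, working weeks) are illustrative assumptions supplied by the user, not figures from the study.

```python
def genai_roi(staff: int, hours_saved_per_week: float, hourly_cost: float,
              adoption_rate: float, weeks_per_year: int = 48) -> dict:
    """Rough annual-savings estimate for GenAI-assisted routine work.

    All parameters are illustrative assumptions; none come from the
    underlying research. adoption_rate is the fraction of staff who
    actually use GenAI in their workflow (0.0 to 1.0).
    """
    # Total hours freed up across the adopting share of the workforce.
    hours_reclaimed = staff * adoption_rate * hours_saved_per_week * weeks_per_year
    return {
        "hours_reclaimed_annually": round(hours_reclaimed),
        "annual_savings": round(hours_reclaimed * hourly_cost, 2),
    }

# Hypothetical example: 200 staff, 2 h/week saved, $15/h fully loaded cost,
# 50% adoption -> 9,600 hours reclaimed, $144,000 saved per year.
print(genai_roi(200, 2.0, 15.0, 0.5))
```

A linear model like this overstates savings when reclaimed time is fragmented across many small tasks, so treating the result as an upper bound is prudent.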

GenAI Adoption Roadmap

A phased approach to safely and effectively integrate Generative AI into your public sector operations.

Phase 1: Risk Containment & Provisional Guidance

Issue interim usage guidelines for low-risk GenAI applications, formally legitimizing certain uses while discouraging others to reduce uncertainty and personal liability.

Phase 2: Organizational Embedding & Capacity Building

Allocate dedicated budgets for secure GenAI tools, integrate GenAI into existing workflows, and establish internal review protocols. Develop role-specific training modules for prompt literacy and verification skills.

Phase 3: Formalized Accountability & Governance Structures

Clarify responsibility for AI-assisted outputs, define accountability-sharing mechanisms, and embed GenAI oversight within existing audit and administrative law frameworks. Design mechanisms for collective organizational accountability.

Ready to Transform Your Public Sector?

Overcome governance paralysis and build a robust GenAI strategy. Schedule a consultation with our experts to navigate the complexities of AI adoption in resource-constrained environments.
