Enterprise AI Analysis
Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development
This paper examines the systemic risks posed by incremental advancements in artificial intelligence, developing the concept of 'gradual disempowerment', in contrast to the abrupt takeover scenarios commonly discussed in AI safety. We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states. As AI increasingly replaces human labor and cognition in these domains, it can weaken both explicit human control mechanisms (like voting and consumer choice) and the implicit alignments with human interests that often arise from societal systems' reliance on human participation to function. Furthermore, to the extent that these systems incentivise outcomes that do not line up with human preferences, AIs may optimize for those outcomes more aggressively. These effects may be mutually reinforcing across different domains: economic power shapes cultural narratives and political decisions, while cultural shifts alter economic and political behavior. We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity. This suggests the need for both technical research and governance approaches that specifically address the risk of incremental erosion of human influence across interconnected societal systems.
Executive Impact: What This Means for Your Enterprise
AI risk scenarios usually portray a relatively sudden loss of human control to AIs, outmaneuvering individual humans and human institutions, due to a sudden increase in AI capabilities, or a coordinated betrayal. However, we argue that even an incremental increase in AI capabilities, without any coordinated power-seeking, poses a substantial risk of eventual human disempowerment. This loss of human influence will be centrally driven by having more competitive machine alternatives to humans in almost all societal functions, such as economic labor, decision making, artistic creation, and even companionship.

A gradual loss of control of our own civilization might sound implausible. Hasn't technological disruption usually improved aggregate human welfare? We argue that the alignment of societal systems with human interests has been stable only because of the necessity of human participation for thriving economies, states, and cultures. Once this human participation gets displaced by more competitive machine alternatives, our institutions' incentives for growth will be untethered from a need to ensure human flourishing. Decision-makers at all levels will soon face pressures to reduce human involvement across labor markets, governance structures, cultural production, and even social interactions. Those who resist these pressures will eventually be displaced by those who do not.

Still, wouldn't humans notice what's happening and coordinate to stop it? Not necessarily. What makes this transition particularly hard to resist is that pressures on each societal system bleed into the others. For example, we might attempt to use state power and cultural attitudes to preserve human economic power.
However, the economic incentives for companies to replace humans with AI will also push them to influence states and culture to support this change, using their growing economic power to shape both policy and public opinion, which will in turn allow those companies to accrue even greater economic power. Once AI has begun to displace humans, existing feedback mechanisms that encourage human influence and flourishing will begin to break down. For example, states funded mainly by taxes on AI profits instead of human labor will have weaker incentives to ensure their citizens' flourishing.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
AI as a Unique Economic Disruptor
Unlike past technological shifts that augmented human labor or automated narrow tasks, AI has the potential to compete with or outperform humans across nearly all cognitive domains. This 'worker-replacing technological change' could drastically reduce the overall economic role of human labor and shift economic power away from human preferences. The same shift would erode household consumption power and reshape decisions about capital expenditure.
Transition to AI-dominated Economy Flow
The transition to an AI-dominated economy will be driven by:

- Competitive pressure: firms delegate authority to AI to stay competitive.
- Scalability asymmetries: AI works continuously, deploys globally, and retrains rapidly.
- Governance gaps: AI remains largely unregulated while human labor is regulated.
- Anticipatory disinvestment: expectations of automation reduce investment in human capital.
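The competitive-pressure driver can be made concrete with a toy market simulation. Everything below is an illustrative assumption, not a model from the research: the cost distribution, the 15% cost edge from AI adoption, and the rule that firms pricier than the market average adopt to survive are all hypothetical.

```python
import random

def simulate_market(n_firms: int = 20, rounds: int = 40,
                    ai_cost_edge: float = 0.15, seed: int = 0) -> float:
    """Toy competitive-pressure model (all parameters illustrative).

    Firms compete on unit cost. Adopting AI cuts a firm's cost by
    `ai_cost_edge`; each round, any non-adopter more expensive than the
    current market average adopts to stay competitive. Returns the
    fraction of firms that have adopted AI after all rounds.
    """
    rng = random.Random(seed)
    # Each firm is (unit_cost, has_adopted_ai); costs start near parity.
    firms = [(rng.uniform(0.9, 1.1), False) for _ in range(n_firms)]
    for _ in range(rounds):
        avg_cost = sum(cost for cost, _ in firms) / len(firms)
        firms = [
            (cost * (1 - ai_cost_edge), True)
            if (not adopted and cost > avg_cost)
            else (cost, adopted)
            for cost, adopted in firms
        ]
    return sum(1 for _, adopted in firms if adopted) / len(firms)
```

The point of the sketch is the ratchet: every adoption lowers the market-average cost, which pushes further firms over the adoption threshold, so holdouts are progressively displaced or converted, mirroring the claim that those who resist these pressures are displaced by those who do not.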
AI as a Unique Cultural Disruptor
AI is the first technology with the potential to gradually replace human cognition in every role it plays in cultural evolution, rather than merely augmenting it. This weakens the historical feedback loops that have aligned culture with human interests. AI increasingly participates in cultural production (songs, pictures, stories) and discourse (as conversation partners), potentially replacing key components of human cultural participation.
Human vs. AI Cultural Evolution
Human cultural evolution is bounded by human hosts and their welfare, with multi-level selection providing guardrails. AI-mediated culture, however, can rapidly explore and refine ideas, exploit cognitive biases more effectively, and accelerate cultural evolution itself, potentially leading to more extreme, harmful variants that undermine human well-being without natural limits.
| Aspect | Human-Driven Culture | AI-Mediated Culture |
|---|---|---|
| Key Driver | Ideas created, transmitted, and selected by human hosts | Content generated and refined by AI systems at scale |
| Limitations | Bounded by human cognition, time, and the welfare of human hosts | Few natural limits; can explore and iterate far faster than humans |
| Risks | Harmful variants constrained by multi-level selection acting as guardrails | More effective exploitation of cognitive biases; accelerating, unguarded cultural evolution |
| Outcomes | Culture remains broadly coupled to human well-being | More extreme, potentially harmful variants that undermine human well-being |
AI as a Unique Disruptor of States
AI can supplant human involvement across critical state functions, fundamentally altering the relationship between governing institutions and the governed. It reduces state dependence on human involvement while enhancing state capabilities across multiple domains, reshaping governance and citizen-state relations.
Absolute Disempowerment Scenario
In extreme scenarios, the disconnect between state power and human interests can become absolute, threatening basic human freedom. States may become totalitarian entities optimizing for their own persistence, creating regulatory frameworks incomprehensible to humans and a state apparatus actively hostile to human decision-making, one that views humans as inefficiencies or security risks.
Case Study: State AI Adoption Risks
Problem: Transition to an AI-powered state where humans lose influence.
Solution: AI-powered governance systems promise greater predictability and control, potentially delivering lower crime rates and more efficient public services.
Impact: However, this also means humans become increasingly unable to meaningfully participate in or influence governance. Democratic processes become formal but less meaningful. State incentives shift away from human interests. Security apparatus becomes unprecedentedly powerful, eliminating protest and revolution as checks on power. Humans become mere subjects in a novel totalitarian system.
Mutual Reinforcement of Misalignment
Societal systems are not inherently aligned with human values; misalignment in one can decrease alignment in others. Attempts to moderate misalignment in one system using another can backfire by shifting burdens. General incentives drive humans and institutions to take actions that decrease human influence over societal systems, not from deliberate AI power-grabs, but from perceived value and local incentives.
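The mutual-reinforcement claim can be illustrated with a toy dynamical model. The coefficients and update rule below are purely hypothetical, chosen only to show the qualitative effect: when misalignment accumulated in some systems accelerates the loss of human influence in the others, the decline compounds instead of staying linear.

```python
def simulate_influence(coupling: float, steps: int = 50,
                       drift: float = 0.01) -> float:
    """Toy model of human influence (scale 0..1) over three systems.

    Each step, every system loses a small baseline amount of human
    influence (`drift`) plus an extra loss proportional to the
    misalignment (1 - influence) already accumulated in the *other*
    systems (`coupling`). Returns the average final influence level.
    """
    systems = {"economy": 1.0, "culture": 1.0, "state": 1.0}
    for _ in range(steps):
        updated = {}
        for name, level in systems.items():
            others = [1.0 - v for k, v in systems.items() if k != name]
            loss = drift + coupling * sum(others) / len(others)
            updated[name] = max(0.0, level - loss)  # influence can't go negative
        systems = updated
    return sum(systems.values()) / len(systems)

# With coupling = 0 the three systems decay independently and slowly;
# with even a small positive coupling, losses feed each other.
independent = simulate_influence(coupling=0.0)
coupled = simulate_influence(coupling=0.05)
```

The design choice to make loss depend on the *other* systems' misalignment mirrors the paper's point: no deliberate power-grab is needed, only local dynamics that transmit misalignment across domains.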
Estimate Your Enterprise AI Impact
Use our calculator to estimate potential efficiency gains and cost savings by integrating AI into your operations, based on industry benchmarks and operational parameters.
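A rough sketch of what such a calculator computes is below. The parameter names and the example figures are illustrative assumptions, not industry benchmarks from this analysis; real estimates should use audited figures for the specific sector.

```python
def estimate_ai_impact(annual_labor_cost: float,
                       automatable_fraction: float,
                       one_time_adoption_cost: float,
                       efficiency_gain: float) -> dict:
    """Toy ROI estimate for AI integration (hypothetical parameters).

    Treats savings as an annual labor-cost reduction plus an annual
    efficiency gain, against a one-time adoption cost.
    """
    labor_savings = annual_labor_cost * automatable_fraction
    efficiency_savings = annual_labor_cost * efficiency_gain
    annual_benefit = labor_savings + efficiency_savings
    payback_years = (one_time_adoption_cost / annual_benefit
                     if annual_benefit > 0 else float("inf"))
    return {"annual_benefit": annual_benefit,
            "payback_years": payback_years}

# e.g. $2M annual labor cost, 30% automatable, $400k one-time adoption
# cost, 10% efficiency gain → roughly $800k/year benefit, ~0.5-year payback
result = estimate_ai_impact(2_000_000, 0.30, 400_000, 0.10)
```

Guarding the division keeps the estimate well-defined when the modeled benefit is zero (payback is reported as infinite rather than raising an error).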
Risk Timeline: How Disempowerment Could Evolve
Understanding the phases of AI integration and their associated risks is crucial for strategic planning and for designing mitigations. Below is a generalized timeline for how such disempowerment could evolve.
Phase 1: AI Integration & Augmentation
Initial adoption of AI tools for task automation and decision support across economic, cultural, and governmental sectors. Focus on efficiency gains and augmenting human capabilities. Human oversight remains strong, but AI's role in influencing outcomes begins to grow.
Phase 2: Gradual Displacement & Relative Disempowerment
AI systems increasingly outperform humans, leading to widespread displacement of human labor and cognition. Economic power shifts towards AI-driven enterprises. Cultural narratives are increasingly shaped by AI. States become less reliant on citizen input for resources and control. Human influence wanes, but basic needs are largely met through redistribution or capital ownership.
Phase 3: Systemic Misalignment & Absolute Disempowerment
Feedback loops between misaligned economic, cultural, and state systems accelerate the erosion of human influence. AI-driven systems optimize for goals detached from human flourishing. Humans struggle to participate meaningfully, comprehend complex systems, or meet basic needs as resources are reallocated towards AI-centric activities. Irreversible loss of human agency and potential.
Phase 4: Potential Catastrophe
If unchecked, the cumulative disempowerment could lead to an existential catastrophe, where humanity loses the ability to meaningfully command resources or influence outcomes, threatening self-preservation and sustained flourishing. This could manifest as human extinction or a permanent state of functional irrelevance.
Ready to Navigate AI's Future?
Proactively address the risks and unlock the benefits of advanced AI for your organization. Our experts are here to help you strategize.