Enterprise AI Analysis
A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism
Authored by Danielle Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman, and Kinney Zalesne. Published January 2024 (Preprint version).
Executive Impact Summary
This paper provides a roadmap for AI governance, advocating for a proactive, expansive vision that prioritizes human flourishing, democratic stability, and economic empowerment. It moves beyond reactive risk management to suggest broad investments in public goods, personnel, and democracy itself.
Deep Analysis & Enterprise Applications
What is a Foundation Model?
A foundation model is a large neural network, trained on broad data, that is notably adaptable and capable of a wide variety of downstream tasks (e.g., OpenAI's GPT-3 and GPT-4).
Understanding Open-Source AI
An open-source model allows anyone to access, modify, and redistribute the system's underlying code (and often its weights). In contrast, closed-source models restrict access to the underlying source code to specific companies or developers.
Defining AI Alignment
Alignment generally refers to calibrating a model to a set of guiding rules or principles so that it steers toward intended outputs. This is typically done through fine-tuning and related methods.
AI Capabilities and Use Cases
Capabilities: the functions that AI systems are able to perform (e.g., classifying data, grouping data, making predictions).
Use Cases: the specific applications in which AI systems are put to use, such as supporting algorithmic prediction by insurance firms or operating autonomous vehicles.
Interaction and Systemic Risk
Interaction risks arise from how AI capabilities interact with other biological and social systems (e.g., algorithmic prediction in social media driving negative mental health outcomes). Systemic harm and benefit consider impacts on groups, societies, or even humanity, not just individuals.
Comparative Overview: National AI Governance Frameworks
| Framework | Guiding Principle | Risk Focus | Key Characteristic |
|---|---|---|---|
| China | Socialist core values, national security | Content generation, monitoring, control of speech | Ensures responsible growth, standardizes application, safeguards national security and public interests. |
| European Union | Liberal, rights-protective | Risk-based tiering (unacceptable, high, minimal) | Emphasizes individual rights, complements GDPR, focuses on regulating use cases. |
| Japan | Human-centric AI ("Society 5.0") | Bolstering principles through AI's positive impact | Focuses on the upsides of AI; no direct restrictions on AI use. |
| United Kingdom | Precedent and common law evolution | Discrimination, product safety, consumer law | Technology-neutral legislation, relies on existing regulators. |
| United States | Liberal, rights-protective, national security, economic competitiveness, equity | Safety, innovation, competition, rights, marginalized populations | Primarily uses existing agency structure with coordinating vehicles. |
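The EU's risk-based tiering can be illustrated as a simple lookup from use case to regulatory tier. This is a minimal sketch for illustration only: the tier names follow the EU AI Act, but the specific use-case mapping here is an assumption, not an exhaustive or authoritative reading of the Act.

```python
# Hypothetical sketch of EU AI Act-style risk tiering. Tier names follow the
# Act; the use-case-to-tier mapping below is illustrative, not authoritative.

RISK_TIERS = {
    "social_scoring": "unacceptable",    # prohibited practices
    "biometric_identification": "high",  # strict conformity obligations
    "hiring_screening": "high",
    "chatbot": "limited",                # transparency obligations
    "spam_filter": "minimal",            # largely unregulated
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to 'minimal'."""
    return RISK_TIERS.get(use_case, "minimal")

print(classify_use_case("hiring_screening"))  # high
```

Note that the approach regulates the use case, not the underlying model, which is why the same foundation model can fall into different tiers depending on deployment.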
Proposed Normative Framework: The Power-Sharing Liberalism Approach
The paper introduces an alternative normative framework based in power-sharing liberalism, aiming to guide AI governance towards human flourishing through four core propositions:
- Proposition 1: Technology, properly conceived, ought to advance human flourishing.
- Proposition 2: Human flourishing requires individual autonomy.
- Proposition 3: Autonomy requires the values of democratic governance.
- Proposition 4: Autonomy requires the material bases of empowerment.
This framework expands beyond narrow risk management to encompass broader societal impacts, proactive vision, and investment in public goods and democratic capacity.
Your Enterprise AI Implementation Roadmap
Based on the governance tasks outlined in "A Roadmap for Governing AI," here's a structured approach to integrating AI responsibly and effectively into your organization.
Blocking and Mitigating Harms
Establish robust frameworks to identify and prevent harms from AI, focusing on individual negative liberties, inspired by the EU AI Act's use-case approach. Ensure existing agencies are equipped for this.
Seeing and Mastering Emergent Capabilities
Develop capacity to track and manage novel AI capabilities. Implement licensing for frontier AI labs and invest in public sector compute for independent evaluation and red-teaming against catastrophic risks.
Blocking Bad Actors
Implement transparency and auditability measures in AI systems to prevent misuse for deep fakes, cyberattacks, and financial crimes. Foster international cooperation and security protocols to protect sensitive information.
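One concrete form such auditability can take is a tamper-evident log of AI system events, where each record hashes the one before it so that after-the-fact modification is detectable. The sketch below is a minimal illustration under assumed names (`append_audit_record`, the event fields); it is not prescribed by the paper.

```python
import hashlib
import json
import time

def append_audit_record(log, event):
    """Append a tamper-evident record: each entry hashes the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "ts": time.time(), "prev": prev_hash}
    payload = {k: record[k] for k in ("event", "ts", "prev")}
    record["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return log

log = []
append_audit_record(log, {"model": "gen-v1", "action": "generate", "flagged": False})
append_audit_record(log, {"model": "gen-v1", "action": "generate", "flagged": True})

# Chain integrity check: each record's "prev" must match the prior hash.
assert all(log[i]["prev"] == log[i - 1]["hash"] for i in range(1, len(log)))
```

Hash-chaining makes silent deletion or rewriting of earlier entries detectable by any auditor holding the latest hash, which supports the external verification the roadmap calls for.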
Steering Toward Public Goods
Proactively direct AI development towards public goods. Invest in R&D for solutions in public health, social well-being, climate sustainability, democratic stability, and economic integration and innovation.
Building Human Capital Strategy
Strengthen the talent pipeline for public sector AI roles, including engineers, scientists, and ethicists. Advocate for national service expectations for STEM graduates and integrate ethics training into AI development teams.
Investing in Democratic Steering Capacity
Implement pro-democracy reforms (e.g., ranked-choice voting) and invest in civic education and digital civic infrastructure to ensure public institutions remain accountable and resilient to technological changes.