Fontys ICT AI Initiative
Institutional AI Sovereignty Through Gateway Architecture: Implementation Report from Fontys ICT
This report details the design and operational insights from a six-month pilot of an institutional AI platform, demonstrating how a university can offer advanced AI capabilities on its own terms, aligning with European law and pedagogical objectives.
Executive Impact
To address fragmented, high-risk, and inequitable adoption of commercial AI tools, Fontys ICT designed and operated an institutional AI platform in a six-month, 300-user pilot. The aim was to demonstrate that a university of applied sciences can offer advanced AI capabilities on its own terms, with fair access for all students and staff, transparent risks, controlled costs, and full alignment with European law.
Commercial AI subscriptions proved incompatible with these goals, creating unequal access and serious compliance risks. Our solution, a governed gateway platform with three tightly integrated layers (Frontend, Gateway, Provider), enabled the institution to steer traffic to EU infrastructure by default, manage usage, and make model choices transparent. The pilot confirmed technical and organizational feasibility and highlighted the critical need for dedicated AI governance.
Case Study: The Fontys ICT Context
Fontys ICT, a university of applied sciences in the Netherlands, faced the challenge of managing the rapid adoption of generative AI. Students and faculty were using a patchwork of tools (ChatGPT, Claude, Gemini), with only Microsoft Copilot being institutionally licensed. This led to issues of financial inequity (premium subscriptions favoring paying students), technical risks (fragmented contracts, unclear data processing), and a critical question: "if we cannot trust the vendor default, who decides which AI tools we do trust, on what criteria, and through what process?"
The institution identified key requirements commercial subscriptions failed to meet: educational equity, budget management, GDPR/AI Act alignment, pedagogical control, and research flexibility. This context underscored the need for a sovereign, accountable, and publicly compatible AI infrastructure.
Deep Analysis & Enterprise Applications
The Rationale for Institutional AI Sovereignty
The pilot demonstrated the fundamental inadequacy of commercial AI subscription models for educational institutions. The institution's requirements for AI infrastructure diverged significantly from what commercial offerings provide, necessitating an institutional solution.
Key Motivations:
- Educational Equity: Access cannot depend on personal budgets. Premium subscriptions gave paying students obvious advantages in coursework and experimentation while free-tier limits held others back.
- Budget Management and Cost Control: Individual commercial subscriptions led to fragmented spending, lack of ROI tracking, and administrative overhead. Scaling such subscriptions institution-wide was economically unsustainable.
- GDPR and AI Act Alignment: Vendors' scattered documentation made institutional-scale compliance impossible. Deployers need clear, consolidated documentation of capabilities, limitations, and oversight.
- Pedagogical Control: ICT students need programmable infrastructure, including API access, cost-performance analysis, fallback strategies, and exposure to multiple architectures. Consumer chat interfaces hide crucial complexity.
- Research Flexibility: Professorships require tailored model combinations, budget profiles, and APIs for existing development workflows, which consumer products lack.
| Approach | Equity of Access | Data Sovereignty & Compliance | Pedagogical Flexibility | Feasibility |
|---|---|---|---|---|
| Commercial Licensing | Medium | Low | Low | High |
| Consortium Procurement (e.g., SURF) | Medium-High | Medium | Low-Medium | High |
| Self-Hosting | High | High | High | Low |
| Gateway Architecture (Selected) | High | High | High | Medium-High |
The gateway approach provided the workable balance needed: equitable, transparent access with enforceable controls, and the agility to respond to changing provider risk assessments and sector guidance.
Architecture & Implementation Details
The system was designed with a clean separation of concerns across three distinct layers to meet institutional requirements while enabling independent evolution and multi-provider integration.
Figure: Three-Layer Gateway Architecture (Frontend → Gateway → External Provider)
Layer Descriptions:
- Frontend Layer (OpenWebUI): Handles user interaction, identity integration (Azure AD for SSO and group-based access), explicit model selection with visible geographic, cost, and capability indicators, and model card links.
- Gateway Layer (Portkey): The operational core. It translates institutional policy into technical enforcement, controlling access and budgets, routing requests (EU-first by default), logging usage, and abstracting provider details; a minimal request sketch follows this list.
- External Provider Layer: Supplies the model catalog from vetted commercial providers (Azure AI Foundry, Anthropic, OpenRouter) and self-hosted open-source models (GreenPT), with contractual controls over data handling and hosting geography.
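The practical effect of this layering is that applications and coursework talk only to the gateway, never to vendors directly. The snippet below is a minimal sketch of such a request, assuming the gateway exposes an OpenAI-compatible endpoint (as gateways like Portkey typically do); the base URL, environment variable, and catalog model name are illustrative placeholders, not the pilot's actual configuration.

```python
# Minimal sketch: all traffic goes through the institutional gateway, not vendor APIs.
# The base URL, key variable, and model name are placeholders, not the pilot's real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://ai-gateway.example.edu/v1",   # institutional gateway endpoint (hypothetical)
    api_key=os.environ["INSTITUTIONAL_AI_KEY"],     # per-user/group key, enabling budgets and usage logging
)

response = client.chat.completions.create(
    model="eu-default-chat",   # catalog name the gateway resolves to a vetted, EU-hosted deployment
    messages=[{"role": "user", "content": "Explain retrieval-augmented generation in two sentences."}],
)
print(response.choices[0].message.content)
```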
Model Cards as Governance Interface: Model cards were transformed from static technical documentation into active governance instruments, used for systematic risk evaluation, transparent documentation, and as a shared reference framework for institutional decision-making and informed user consent. They consolidate vendor documentation and map it to the institution's ethical requirements.
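As a rough illustration of what such a card can consolidate, the sketch below models a card as a small data structure; the field names and example values are assumptions for illustration, not the pilot's actual schema.

```python
# Illustrative model card as a structured governance record.
# Field names and values are assumptions, not the Fontys ICT schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_id: str                     # catalog name shown to users in the frontend
    provider: str                     # contracting party supplying the model
    hosting_region: str               # "EU" by default; anything else triggers consent requirements
    cost_per_1k_tokens_eur: float     # feeds budget caps and cost transparency
    capabilities: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    requires_explicit_consent: bool = False   # e.g. for non-EU hosting
    last_reviewed: str = ""                   # ISO date of the most recent compliance review

example_card = ModelCard(
    model_id="eu-default-chat",
    provider="ExampleCloud EU",
    hosting_region="EU",
    cost_per_1k_tokens_eur=0.002,
    capabilities=["chat", "code assistance"],
    known_limitations=["no image input"],
    last_reviewed="2025-06-01",
)
```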
The Governance Challenge & Future Direction
The pilot surfaced critical governance questions that technology alone cannot solve: who decides which models are offered, on what grounds, for whom, and with what safeguards? This led to the conclusion that AI is not merely a support function but a strategic matter in its own right, demanding dedicated leadership.
Key Governance Challenges Revealed:
- Model Lifecycle Management: Informal, engineer-led choices worked for a small cohort but lacked documented criteria, authority, and transparency needed for scale.
- Geographic Hosting Decisions: EU-first routing was enforced, but exceptions for US-only models (e.g., Anthropic's Claude) highlighted the need for documented, informed consent and clear approval workflows (see the policy sketch after this list).
- Access Authorization: Manual review of API access requests worked but lacked consistent thresholds, audit trails, and clear policy for various risk and cost levels.
- Budget Allocation: Technical caps controlled spending, but there were no agreed principles for prioritizing resources between universal access and specialized research; evidence-based differentiation is needed.
- Compliance Monitoring: Reactive, informal tracking of provider policy changes proved fragile. Interdisciplinary ownership (technical, legal, privacy) is crucial for ongoing GDPR and AI Act alignment.
- Stakeholder Communication: Transparency about AI governance (why decisions are made, trade-offs, safeguards) requires tailored context and formats for students, faculty, leadership, and privacy officers.
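Several of these challenges (EU-first routing, consent exceptions, budget caps) ultimately reduce to checks the gateway can enforce per request. The sketch below, reusing the illustrative ModelCard from the architecture section, shows one way such a policy could be expressed; the decision logic is an assumption, not the pilot's enforced configuration.

```python
# Sketch of a per-request policy check combining EU-first routing, consent, and budget caps.
# The rules below are illustrative assumptions, not the pilot's enforced configuration.
def route_request(card: ModelCard, user_has_consented: bool,
                  monthly_spend_eur: float, budget_cap_eur: float) -> str:
    """Return a routing decision for a single request."""
    if monthly_spend_eur >= budget_cap_eur:
        return "reject: monthly budget cap reached"
    if card.hosting_region != "EU" and not user_has_consented:
        return "reject: non-EU hosting requires documented, informed consent"
    return f"route to {card.model_id} ({card.provider}, {card.hosting_region})"

# Example: an EU-hosted default passes; a US-only model would need explicit consent first.
print(route_request(example_card, user_has_consented=False,
                    monthly_spend_eur=3.20, budget_cap_eur=10.00))
```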
These challenges underscore the need for a dedicated, institutional function: the AI Officer.
Our Recommended AI Governance Roadmap
Based on the Fontys ICT pilot, we outline a structured approach for establishing institutional AI sovereignty and scalable governance.
01. Establish AI Governance Framework
Define clear evaluation criteria for models (capability, compliance, cost, pedagogy). Designate a decision authority (AI Governance Committee or AI Officer) for model additions/removals and conduct scheduled reviews. Develop Model Cards as primary governance and transparency instruments.
02. Implement Policy Enforcement & Access
Develop a documented framework for geographic hosting decisions, including data sensitivity tiers, user awareness thresholds, and approval workflows for international data transfers. Standardize request forms and evaluation criteria for API access, integrating Model Cards for informed consent.
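One way to make such a framework concrete is a simple mapping from data sensitivity tiers to permitted hosting regions and approval steps, which the gateway and human reviewers can both reference. The tiers and rules below are illustrative assumptions, not an adopted Fontys ICT policy.

```python
# Illustrative mapping of data sensitivity tiers to hosting and approval rules.
# Tier names, regions, and approval steps are assumptions, not adopted policy.
HOSTING_POLICY: dict[str, dict] = {
    "public":           {"regions": ["EU", "US"], "approval": "none"},
    "internal":         {"regions": ["EU", "US"], "approval": "user consent via model card"},
    "personal_data":    {"regions": ["EU"],       "approval": "privacy officer sign-off"},
    "special_category": {"regions": [],           "approval": "not permitted on external providers"},
}

def allowed_regions(sensitivity_tier: str) -> list[str]:
    """Look up which hosting regions a given sensitivity tier may use."""
    return HOSTING_POLICY.get(sensitivity_tier, {"regions": []})["regions"]
```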
03. Optimize Resource Allocation
Define clear principles for distributing AI infrastructure funding, including baseline access, research allocations, and administrative budgets. Utilize usage data from dashboards and Model Cards for cost transparency, guiding resource rebalancing and fostering cost-conscious use.
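As a small illustration of how gateway usage data might feed such decisions, the sketch below aggregates per-request cost records into per-group spend; the log fields, group labels, and amounts are hypothetical, and real gateway exports will differ.

```python
# Sketch: turning gateway usage logs into per-group spend for cost transparency.
# Log fields, group labels, and amounts are hypothetical examples.
from collections import defaultdict

usage_log = [
    {"group": "students", "model": "eu-default-chat",   "cost_eur": 0.004},
    {"group": "research", "model": "us-frontier-model", "cost_eur": 0.120},
    {"group": "students", "model": "eu-default-chat",   "cost_eur": 0.006},
]

spend_per_group: dict[str, float] = defaultdict(float)
for record in usage_log:
    spend_per_group[record["group"]] += record["cost_eur"]

for group, spend in sorted(spend_per_group.items()):
    print(f"{group}: €{spend:.3f}")
```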
04. Ensure Continuous Compliance
Move from reactive to structured oversight of provider policy changes. Revalidate and reissue Model Cards when providers update their models or terms, linking technical configuration to legal accountability. Establish clear incident response paths and escalation procedures for security breaches or user-reported harms.
05. Foster Transparent Communication
Develop role-specific communication channels and formats (student-facing docs, faculty workshops, leadership briefings, privacy coordination). Involve stakeholders in policy development. Provide regular updates on platform status, usage, governance, and changes, using institutional artifacts for oversight.
06. Define Dedicated AI Leadership
Establish a formal AI Officer role or coordinated team reporting to institutional leadership. This role bridges technical understanding, governance authority, and educational responsibility, ensuring AI remains robust, compliant, sustainable, and pedagogically aligned across the enterprise.
Ready to Reclaim Your AI Agency?
Don't outsource your institutional values. Implement a sovereign AI gateway architecture that aligns with your educational mission, compliance needs, and budget. Our experts are ready to guide you.