
The Framework

Individual figures moving through layered global maps and abstract structures, symbolizing how sovereign AI finance is designed to serve plural publics by embedding representation, visibility, and accountability into national AI systems.

Sovereign AI Finance is structured around three pillars and a capital architecture designed to address the temporal mismatch between AI capability and institutional capacity. This framework functions as the technical specification of the field.

The Three Pillars

Sovereign AI Finance rests on three interdependent pillars:

Pillar 1: Sovereign AI Strategy

Defines what a state is trying to achieve with AI capability.


Pillar 2: Dedicated Capital Architecture

Provides the financial mechanism to sustain those choices over time.

Pillar 3: Institutional Governance Principles

Ensures that both remain aligned with public purpose across political cycles and market volatility.

Human figures moving upward along a shared path across the Americas, embedded within an abstract profile, representing how sovereign AI finance supports long-term domestic opportunity, institutional stability, and locally grounded futures through sustained national AI capacity.

How the Framework Operates

The three pillars establish the structure of Sovereign AI Finance, but they are not independent. Strategy gives the capital architecture and the institutional governance their direction. The capital architecture finances the choices strategy has made. The institutional governance protects those choices from erosion over time. Together, they define how Sovereign AI Finance operates in practice.

 

The sections that follow develop each pillar in detail, specifying how strategy, capital architecture, and institutional governance operate within this structure.

Pillar 1: Sovereign AI Strategy

Sovereign AI Finance begins with strategy. Before a state can decide how to finance AI capability, it must decide what kind of AI capability it intends to develop and why. Strategy is the layer that gives the financing architecture and the institutional governance their direction. Without it, both become technical exercises rather than expressions of national purpose.

A sovereign AI strategy is not a vision document. It is a structured answer to three questions every state must address, whether explicitly or by default. The framework names the questions; it does not prescribe the answers, which depend on each state's economic position, institutional capacity, and strategic context.

Abstract scene of people in motion intersecting with a world map and data layers, illustrating how sovereign AI governance and financing shape representation, visibility, and inclusion within national and global systems.

1. POSITION

Where is the state today, and where does it intend to be?

The first question is the state's current relationship to frontier AI and where it intends that relationship to be in a decade. Most countries today are consumers of AI systems designed, financed, and governed elsewhere. A smaller number are contributors, providing talent or research to systems built abroad. A few are builders, developing frontier systems within their own borders.

The strategic question is not which category a country occupies today, but which it intends to occupy in ten years, what would have to be true for that transition to be possible, and what the consequences would be of failing to make it. The answer determines what the capital architecture is being built to finance.

2. DEPENDENCY

Which dependencies are acceptable, and which are not?

The second question is the boundary between acceptable and unacceptable dependency. No state can be self-sufficient in frontier AI, just as no state is self-sufficient in semiconductors, energy, or pharmaceuticals. The relevant question is which dependencies are tolerable and which are not.

A dependency on foreign compute may be acceptable if alternative providers exist. A dependency on foreign models for governance-critical functions may not be. A dependency on a single foreign provider for any part of the AI stack creates strategic exposure that financing alone cannot fix. The strategy must name these boundaries explicitly and design the capital architecture to address the dependencies it judges unacceptable.

3. PARTICIPATION

How does domestic capability connect to global governance?

The third question is the relationship between domestic capability and the global stakes that frontier AI now creates. Sovereign AI capability is not only a matter of national interest. It is also the foundation on which a state can participate meaningfully in the international governance regimes that frontier AI will eventually require, the regimes that will determine how catastrophic risks are identified, monitored, and addressed across borders.

States with durable AI capability can act as counterparties in those regimes. States without it can only accept terms set by others. A sovereign AI strategy must specify how it intends to build the capability required to participate, and how that capability connects to the broader question of how frontier AI is governed at the global level. This is where the framework's domestic logic and its global implications become inseparable.


These three questions do not exhaust what a sovereign AI strategy must address, but they define its core structure. A state that has answered them, even provisionally, has the foundation on which the capital architecture and the institutional governance can be built. A state that has not answered them is operating without strategy, regardless of how much capital it commits or how well it designs its institutions.


Strategy is upstream of everything else in the framework. Without strategy, both capital architecture and institutional governance become mechanisms without direction. With strategy, they become the means by which a state's intentions about AI become durable institutional reality.


Pillar 2: Dedicated Capital Architecture

The Three-Stage Capital Model

The three-stage capital model is the central financial mechanism of Sovereign AI Finance. It is designed to reconcile three requirements that conventional public finance cannot meet simultaneously: long-term investment, capital preservation, and a financing base that grows in proportion to the activity it governs and supports.


Stage 1: Capital Formation

In the first stage, the capital base of a sovereign AI fund is established through the same mechanisms that sovereign wealth funds in resource-rich and trade-surplus economies have used for decades, or through the hybrid funding models some states have adopted more recently. Capital is allocated to professionally managed investment portfolios.

 

The objective is to preserve principal while generating sustainable returns. This stage follows established practices in sovereign wealth management, including diversification, risk management, and long-term asset allocation. Stage one provides the durability of the capital base.

 

The innovation of the three-stage capital model is not in how the fund is initially capitalized (established sovereign wealth practice already provides the templates) but in what the capital base is built to sustain.

Stage 2: Deployment of Returns

In the second stage, only the returns generated by these investments are deployed into domestic AI infrastructure and capacity.

 

These deployments include compute infrastructure, research programs, talent development, data systems, and governance institutions. Principal remains intact, ensuring that the capital base is not eroded over time.

 

Stage two provides continuous capability development without exposing the foundation to volatility.

Stage 3: Circulation Mechanism

In the third stage, a small fee on AI services used within the country flows back into the fund. The fee is calibrated to remain non-distortive, well under one percent of relevant transactions, and is structured to differentiate between foreign and domestic providers based on the nature and scale of their domestic activity. Stage three provides the circulation mechanism: as AI use within the country grows, the capital base grows with it, creating a self-reinforcing system in which domestic AI activity sustains the very institutions that govern it.

This third stage applies a logic familiar from sovereign wealth management in resource-rich states. For half a century, countries with significant natural resources have used sovereign wealth funds to capture rent from extractive industries and convert it into long-term public assets, with Norway's Government Pension Fund Global as the canonical example. The three-stage capital model applies the same logic to a different resource: not oil beneath the ground, but AI activity within the economy. The state has legitimate standing to direct a small share of that activity toward institutions that govern AI for the benefit of society and the mitigation of catastrophic AI risks.


Together, the three stages address the temporal mismatch between AI capability and public finance. Stage one ensures durability. Stage two ensures continuous reinvestment without eroding the capital base. Stage three ensures that the financing base grows in proportion to the activity it governs, so that institutional capacity does not fall behind capability over time.

The three-stage model introduces discipline into AI financing while creating a regenerative loop that conventional public finance cannot replicate. It separates capital preservation from capability development, ensures that short-term pressures do not compromise long-term capacity, and establishes a mechanism through which AI-generated economic activity is converted into the institutional foundations that sustain it. The result is a financing architecture that scales with the field it is designed to govern.
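The interaction of the three stages can be sketched as a simple simulation. All figures below, including the endowment size, return rate, fee rate, and activity growth rate, are hypothetical placeholders chosen for illustration, not values prescribed by the framework:

```python
# Illustrative sketch of the three-stage capital model. Every number here
# is an assumption: a $10B initial endowment, a 5% annual investment
# return, a 0.25% fee on domestic AI activity, and 20% annual growth in
# that activity. None of these values come from the framework itself.

def simulate(years, principal=10_000_000_000, annual_return=0.05,
             fee_rate=0.0025, ai_activity=50_000_000_000,
             activity_growth=0.20):
    """Return (final principal, per-year deployment) for the model."""
    deployed_per_year = []
    for _ in range(years):
        # Stage 2: only the investment returns are deployed into AI
        # infrastructure and capacity; the principal is never drawn down.
        returns = principal * annual_return
        deployed_per_year.append(returns)

        # Stage 3: a small, non-distortive fee on domestic AI activity
        # flows back into the fund, so the base grows with the sector.
        principal += ai_activity * fee_rate
        ai_activity *= 1 + activity_growth
    return principal, deployed_per_year

final_principal, deployed = simulate(years=10)

# The principal only grows (Stage 1 durability plus Stage 3 circulation),
# and the amount available for annual deployment rises with it.
assert final_principal > 10_000_000_000
assert deployed[-1] > deployed[0]
```

Under these toy assumptions the capital base never shrinks, and annual deployment capacity rises as domestic AI activity grows, which is the self-reinforcing property the model is designed to produce.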

Pillar 3: Institutional Governance Principles

Institutional governance is the layer through which the strategy defined in Pillar 1 and the capital architecture described in Pillar 2 become durable in practice. Strategy without governance becomes intention without enforcement. Capital without governance becomes resources without discipline. Pillar 3 is the mechanism that turns both into a functioning institution, one capable of acting on behalf of the state over decades and across administrations.

Three core principles

Institutional governance within Sovereign AI Finance is guided by three core principles: clarity of mandate, professional management, and public accountability.

Clarity of mandate ensures that institutions operate with defined objectives and boundaries. This reduces ambiguity and supports consistent decision-making, particularly in the moments when decisions are politically difficult. Professional management ensures that capital and systems are managed by individuals with the necessary expertise, rather than subject to ad hoc political intervention. Public accountability ensures that institutions remain aligned with societal interests over time, and that the public has the standing to hold them to account when they drift.

These principles align with broader understandings of state capacity in political economy and public finance. Effective institutions require both autonomy and accountability. They must be capable of acting independently while remaining subject to oversight. Sovereign AI Finance applies this balance to the governance of AI systems and the capital structures that sustain them.


Governance as catastrophic risk mitigation

The governance layer is where catastrophic AI risk becomes actionable rather than theoretical. The risks that frontier AI now creates, including misuse, loss of human control, power concentration, mass manipulation, autonomous system failures, and the erosion of oversight, cannot be addressed by capital alone. They require institutions with the mandate to act on them, the expertise to recognize them, and the independence to respond without waiting for political permission.

Pillar 3 gives the sovereign AI fund this capacity. A fund with clear mandates can be required to fund safety research, evaluation infrastructure, and oversight institutions as a condition of its operation. A fund with professional management can assess frontier AI risks using the same rigor applied to financial risk. A fund with public accountability can be held to account when it fails to act on risks that its own mandate identifies. Together, the three principles convert catastrophic risk from an external threat the state must react to into a governance obligation the state has already committed to addressing.

Should countries build sovereign AI funds governed under these principles, the collective effect on catastrophic risk is compounding rather than additive. Each additional fund expands the number of jurisdictions with domestic capacity to monitor, fund, and act on frontier AI risks. A world in which fifty-plus countries govern AI under comparable mandates is a world in which catastrophic risks have fewer places to hide and more institutions watching for them. This is how safety scales globally: not through a single authority, but through an expanding network of sovereign institutions whose governance structures create consistent pressure on the same risks.

The sovereign wealth fund precedent

The capacity to act on catastrophic risk through governance is not hypothetical. Sovereign wealth funds have been doing comparable work for decades in other domains. Norway's Government Pension Fund Global, the largest sovereign wealth fund in the world, has divested from tobacco, coal, weapons manufacturers linked to the production of nuclear arms and cluster munitions, and companies associated with severe environmental damage or systematic human rights violations. The fund operates under an ethical framework approved by parliament, enforced by a council on ethics, and implemented by professional management with the authority to pull capital from investments that violate the mandate.

The governance precedent is what matters. Sovereign wealth fund governance has demonstrated that states can enforce values-based constraints on capital deployment without compromising financial performance, that the enforcement can be professionalized and insulated from political pressure, and that the constraints can evolve as new risks are identified. The same governance logic applies directly to Sovereign AI Finance. A sovereign AI fund can be structured to release or withhold capital based on whether the AI activity being financed meets the safety, rights, and accountability standards its mandate requires. It can divest from models, providers, or deployments that create catastrophic risks. It can condition funding on compliance with domestic and international safety standards. It can act as a financial instrument for enforcing the very governance commitments the state has made.

This is the function that distinguishes Pillar 3 from conventional public institutional design. The governance layer is not only a safeguard against internal mismanagement. It is an active instrument through which the state's values and obligations regarding frontier AI become enforceable at the level of capital itself.


Governance as the connective tissue

Pillar 3 is what allows Pillars 1 and 2 to function as a system rather than as disconnected components. Strategy defines what the state intends to achieve. Capital architecture provides the resources to achieve it. Governance is the layer that ensures the resources are used in service of the intent, over time, across administrations, and in response to risks that did not exist when the strategy was first written. Without governance, strategy drifts and capital is misallocated. With governance, the framework becomes a durable institution capable of evolving with the technology it is designed to address.

A single figure in focus amid moving crowds and global data overlays, symbolizing institutional agency and responsible decision-making within sovereign AI finance frameworks.

The Framework as a Foundation

The financing architecture described here is more than a technical proposal. It is the substrate that shapes whether sovereign institutions can meaningfully govern frontier AI, or whether they will spend the coming decades responding to systems they did not build, cannot oversee, and do not have the standing to shape.

Countries that begin building this substrate early will be positioned to participate in global AI governance as counterparties. Countries that do not will largely inherit the frameworks others decide to build. Catastrophic AI risk does not wait for institutions to catch up. Sovereign AI Finance is designed so that they can.
