The Assumption Is Breaking
A structural reckoning is underway in enterprise technology — one that most CFOs, boards, and CIOs have not yet fully priced into their infrastructure strategies.
The conventional wisdom held that cloud migration was a modernization decision. A technology choice. A question of agility versus control. That framing is now obsolete.
The AI arms race has fundamentally changed the economics of compute infrastructure. Hardware that costs billions to acquire is economically obsolete in three years — while roughly half its cost still sits undepreciated on a company’s books. Hyperscalers are spending $650 billion annually in a cycle that more closely resembles supermarket restocking than industrial investment. And the chip supply chain that powers this entire ecosystem is being cornered by entities with balance sheets no enterprise can match.
This is not a technology story. It is a capital markets story. And it has a direct parallel in recent financial history.
When the market learned that SaaS ARR wasn’t as safe as priced, multiples collapsed. The same reckoning is coming for cloud capex — from the other direction.
Six forcing functions are simultaneously ending the era of private enterprise hardware ownership:
Economic obsolescence — a 3-year economic asset life on hardware depreciated over 5–6 years
Supply foreclosure — hyperscalers and sovereign buyers cornering chip access at a scale no enterprise can match
Operational drag — the talent, energy, and management overhead of an accelerating hardware cycle
Architectural uncertainty — the dominant compute paradigm may itself be transitional
Accounting distortion — GAAP treatment masks true obsolescence velocity; a regulatory reckoning is coming
Geopolitical fracture — Taiwan concentration risk, U.S.-China tech war, and sovereign AI demand reshaping the entire supply chain
The conclusion is not that the cloud is safe. The conclusion is that the cloud is necessary — and that the enterprises that understand why will make fundamentally different strategic decisions than those that don’t.
The SaaS Parallel
When “Safe” Revenue Wasn’t
To understand what is happening to AI infrastructure investment today, it helps to remember what happened to SaaS valuations yesterday.
For most of the 2010s, SaaS companies commanded premium multiples on the basis of a simple premise: annual recurring revenue was safe revenue. Subscription contracts, high switching costs, net revenue retention above 100% — the narrative was that once a customer was in, they stayed in, and the revenue compounded reliably.
The market priced that narrative generously. Revenue multiples of 15x, 20x, even 30x became common for high-growth SaaS businesses. Investors were not just paying for current earnings — they were paying for the assumed quality and durability of the revenue stream.
Then the assumptions changed.
AI began commoditizing SaaS features at a rate the models had not accounted for. Point solutions that had been protected by workflow lock-in suddenly faced competition from general-purpose AI tools that could replicate core functionality without a subscription. Churn risk, previously theoretical, became measurable. The “recurring” in ARR turned out to be more conditional than the multiples had implied.
The market repriced. Not because the revenue disappeared — but because the quality of that revenue was lower than advertised. The safety assumption was wrong, and when the market recognized it, the correction was swift and significant.
The same structural logic now applies to cloud infrastructure capex — but from the other direction. It is not the revenue that is less safe than priced. It is the investment.
The hyperscalers built their cloud businesses on the premise that infrastructure investment was durable and compounding. You build data centers, fill them with servers, amortize the cost over time, and margins improve as utilization scales. The capex was treated as a long-lived, productive asset.
What Research Affiliates CEO Chris Brightman documented in April 2026 is that this premise is structurally false in the AI era. The capex is not compounding. It is churning. And the churn cycle is accelerating.
The parallel is exact. SaaS ARR appeared safe until the competitive dynamics that made it safe changed. Cloud capex appeared productive until the innovation cycle that defined its useful life accelerated beyond what the accounting models assumed. In both cases, the market priced an assumption. In both cases, the assumption broke.
The SaaS correction already happened. The capex reckoning is beginning now.
The Capex Reckoning
Six Forcing Functions Ending Private Hardware Ownership
The end of private enterprise hardware ownership is not a prediction. It is a convergence — six independent forcing functions arriving simultaneously, each sufficient on its own to shift the calculus, and together making the shift irreversible.
1 — Economic Obsolescence
The Brightman analysis from Research Affiliates establishes the foundational data point: AI hardware has an economic life of approximately three years, while it is depreciated over five to six years on corporate income statements.
The proof is in the unit economics of Nvidia’s H100 GPUs. In year two of deployment, an H100 generated a return on investment of 137%. By year four, the same hardware was producing a negative ROI of 34%. The curve does not flatten — it accelerates downward.
This gap between accounting life and economic life has profound implications. Earnings are systematically overstated during the useful period of the asset. Write-downs are coming as hardware becomes economically obsolete before it clears the books.
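The gap can be made concrete with a back-of-envelope calculation. The figures below are normalized illustrations; only the 3-year economic life and the 5–6-year depreciation schedule come from the analysis above.

```python
# Sketch of the accounting-life vs. economic-life gap described above.
# Amounts are normalized to an acquisition cost of 100; the lives come
# from the Brightman analysis cited in the text.

def straight_line_book_value(cost: float, life_years: int, year: int) -> float:
    """Remaining book value after `year` full years of straight-line depreciation."""
    annual_charge = cost / life_years
    return max(cost - annual_charge * year, 0.0)

cost = 100.0          # normalized acquisition cost
accounting_life = 6   # years on the income statement
economic_life = 3     # years of competitive usefulness, per the analysis above

stranded = straight_line_book_value(cost, accounting_life, economic_life)
print(f"Share of cost still on the books at economic obsolescence: {stranded:.0f}%")
# → Share of cost still on the books at economic obsolescence: 50%
```

Under a 6-year schedule, half the cost is still on the books when the asset stops earning its keep at year three (40% under a 5-year schedule) — capital that must eventually be written down.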
2 — Supply Foreclosure
One number reframes the entire supply foreclosure argument. Elon Musk, announcing the Terafab project on March 21, 2026, stated that the combined output of every advanced semiconductor foundry currently operating on Earth represents approximately 2% of the compute his companies alone will need. The entire current global semiconductor industry — TSMC, Samsung, Intel, all of it — is 2% of one company’s projected demand.
The data center buildout has already run into the supply wall. Analysis presented on Moonshots EP #247 shows that of 12 gigawatts of planned US data center capacity, approximately 50% is delayed or canceled, 33% is under construction, and 17% remains uncertain. The cause is not chip scarcity alone — it is electrical equipment shortage from Chinese supply chains. Transformers, switchgear, and power infrastructure components. The constraint has moved upstream of the chips themselves.
ASML, headquartered in Eindhoven, Netherlands, holds a near-absolute monopoly on extreme ultraviolet lithography — the EUV machines required to manufacture semiconductors at advanced nodes. There is no alternative supplier. ASML produces approximately 50 EUV systems per year globally. Every advanced AI chip in every data center flows through that single bottleneck.
The chip ownership data makes the foreclosure concrete. Tracking cumulative AI chip ownership from 2024 Q1 through 2025 Q4, the global AI chip stock approaches 20 million units — with Google holding the dominant position across both TPUs and H100s. China holds the second-largest cumulative position despite Western export controls. Microsoft, Meta, Amazon, Oracle, and xAI follow. The enterprise market does not appear on this chart. It is not a rounding error. It is structurally absent from the ownership class that will define AI compute capacity for the next cycle.
3 — Architectural Uncertainty
TSMC is a process innovator, not an architecture innovator. Their manufacturing excellence is not in question. But process leadership is not architecture leadership — and the most consequential players in the AI economy are determining architecture themselves, divergently, in parallel.
Apple: The entire Mac lineup moved from Intel to M-series in under two years, with the Neural Engine embedded natively at every tier. A proof of concept for full vertical silicon integration — no Nvidia, no arms race participation.
Amazon: Three custom silicon lines: Graviton (general compute), Trainium (training), Inferentia (inference). A complete architectural stack. Not hedging against Nvidia — systematically eliminating the dependency.
Google: TPU v5 + Axion CPU + Trillium accelerator. Furthest along of any hyperscaler. Fully decoupled from merchant silicon across training, inference, and general compute. The TPU program predates the current AI boom by nearly a decade.
IBM: Telum integrates AI inference acceleration directly into the chip die, co-located with data rather than separated by GPU cluster latency. The Spyre accelerator extends this. AI at the data — not AI at the GPU farm.
Microsoft: Maia AI accelerator and Cobalt ARM CPU. Building the same custom silicon escape hatch quietly, while maintaining the public Nvidia/OpenAI partnership narrative.
Tesla/SpaceX/xAI: A15/A16 for terrestrial inference. D3 radiation-hardened chip for orbital data centers. Terafab targeting 1 terawatt of annual output — 50x current global semiconductor production.
The pattern is unambiguous: every major player with the balance sheet to make the investment is actively routing around the Nvidia/TSMC-dependent GPU architecture. Any enterprise that has invested in private hardware ownership has locked capital into an architectural bet that the largest players have already decided they are not willing to make.
4 — Operational Drag
Managing competitive AI infrastructure requires continuous hardware evaluation, procurement, deployment, and retirement on an accelerating cycle. It requires energy infrastructure that scales with each generation. It requires a specialized talent base staying current with an architecture landscape changing faster than most organizations can hire.
For hyperscalers, this is their core business. For every other enterprise, it is overhead — distraction from the actual competitive work of building products, serving customers, and generating returns. Every dollar spent managing hardware is a dollar not spent building the actual moat.
5 — Accounting Distortion
AI hardware with a 3-year economic life is being depreciated over 5–6 years on corporate income statements. At $650 billion in annual AI capex, the earnings overstatement is not a rounding error. It is a structural feature of current financial reporting that investors, analysts, and boards are only beginning to scrutinize.
The FASB has not yet moved on accelerated depreciation schedules for AI infrastructure. But when $650 billion in annual capex produces shrinking margins and 3-year economic obsolescence, the SEC will eventually ask why income statements reflect 5–6 year depreciation. The regulatory reckoning is forming.
6 — Geopolitical Fracture
TSMC manufactures approximately 90% of the world’s advanced semiconductors — in Taiwan. A geopolitical event in the Taiwan Strait would not disrupt one company. It would simultaneously sever the advanced chip supply for every hyperscaler, every national AI program, every enterprise, and every defense system on the planet that depends on leading-edge silicon. This is not a supply chain risk in the conventional sense. It is a civilizational infrastructure dependency concentrated in the most geopolitically contested geography on earth.
The United States entered the sovereign AI tier at maximum scale with the Stargate Initiative — a $500 billion AI infrastructure commitment by OpenAI, SoftBank, and Oracle with direct White House backing. Stargate is not a technology company initiative. It is a geopolitical response dressed in corporate structure, competing for the same chip supply, energy capacity, and construction resources as every other enterprise infrastructure project. When the U.S. government declares AI infrastructure a national strategic priority, every other buyer in the queue moves down one position.
India is emerging as the most consequential new pole in the global semiconductor landscape. Over $10 billion in government incentives, Tata Electronics entering chip manufacturing, and an explicit industrial policy to reduce Taiwan-concentrated supply chain dependence. India is simultaneously a new supply-side actor, a massive AI demand market, and a jurisdiction with its own data sovereignty regulations affecting any enterprise with Indian operations.
DeepSeek’s achievement in early 2025 — near-frontier model performance at a fraction of assumed compute cost — was not merely a technical breakthrough. It was a direct geopolitical response to chip scarcity: adversarial constraint producing efficiency innovation that the unconstrained Western model had no incentive to pursue. The GPU arms race may be racing toward a ceiling that efficiency innovation is simultaneously lowering from the other side.
The question is no longer just who owns the hardware. It is who controls the geography, the trade policy, the fab capacity, and ultimately the orbital infrastructure that makes the hardware possible — and what happens to every enterprise that assumed those conditions were stable.
The Musk Stack: Vertical Integration at Civilizational Scale
On March 21, 2026, Elon Musk launched Terafab at Giga Texas — a joint venture between Tesla, SpaceX, and xAI described as the most epic chip-building exercise in history. The numbers validate the description.
Terafab targets one terawatt of annual computing output — a 50-fold increase from the approximately 20 gigawatts produced by the entire global semiconductor industry today. Musk’s own statement: “All of the fabs on Earth only provide 2% of what we need for the TeraFab project.”
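The two scale claims are internally consistent, as a quick check shows; the only inputs are the 1-terawatt target and the ~20-gigawatt current-output figure quoted above.

```python
# Consistency check on the Terafab scale claims quoted above: the 50x
# multiple and the ~2% figure both follow from the same two numbers
# (1 TW annual target vs. ~20 GW of current global semiconductor output).
terafab_target_w = 1e12   # 1 terawatt annual compute output target
global_output_w = 20e9    # ~20 GW: the entire current industry, per the text

multiple = terafab_target_w / global_output_w
coverage = global_output_w / terafab_target_w
print(f"Terafab target = {multiple:.0f}x current output; "
      f"current output covers {coverage:.0%} of the target")
# → Terafab target = 50x current output; current output covers 2% of the target
```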
Physical scale: approximately 100 million square feet — Musk’s own order-of-magnitude estimate, roughly 10x the main Giga Texas building. The facility is a factory-of-factories: designed to manufacture the machines that manufacture the chips, using Optimus humanoid robots as the labor force, targeting 10 million Optimus units annually from the same campus.
Output split tells the real story: 80% directed toward space, 20% terrestrial. The D3 chip is purpose-built for orbital deployment — radiation-hardened, designed for the temperature extremes and cosmic radiation of space. SpaceX has already filed with the FCC for permission to launch up to one million data center satellites. The Starlink terminal network provides the ground connectivity layer. The full orbital compute stack is assembling in real time.
For enterprise infrastructure strategy, the Musk Stack introduces a dimension no conventional cloud migration framework accounts for. The terrestrial compute paradigm every cloud migration plan is predicated on is being treated as a transitional 20% allocation by the most aggressive infrastructure builder in history.
The Access Imperative
Cloud Migration Has Moved from IT to the Boardroom
The conventional cloud migration argument was built on three pillars: cost efficiency, operational agility, and access to managed services. These were technology and operations arguments, owned by CIOs and infrastructure teams. Boards heard them as background noise.
The six forcing functions documented above have changed who owns this decision — and why. And they have added a dimension the conventional framework never addressed: not all hyperscalers are equal, and the gap is widening.
The chip ownership data is unambiguous — Google holds the dominant position in specialized AI chip ownership globally, combining its proprietary TPU fleet with H100 holdings at a scale no other commercial entity matches. Google has been building toward this for a decade. The TPU program, Axion, Trillium, DeepMind, and YouTube’s compute scale give Google an infrastructure depth that its hyperscaler peers are still catching up to.
For enterprises making primary cloud commitments, the question of which hyperscaler is not a commodity choice. It is a strategic bet on which entity will have the infrastructure, the chip access, and the architectural runway to remain competitive through the next hardware cycle — and the data suggests that bet is not symmetric.
The Risk Transfer Is Real — But Not Free
Migrating to cloud infrastructure is the rational response to the six forcing functions. You let the hyperscaler carry the hardware obsolescence risk. You convert capital expenditure into operating expenditure and free your organization to compete on the application layer.
This is correct logic. But it requires clear eyes about what you are taking on in exchange. If the hyperscalers are currently running at a loss on their AI infrastructure, then current cloud pricing does not reflect the true cost of the service. One of three scenarios must eventually resolve this:
Cloud pricing increases as hyperscalers pass the real cost to enterprise customers, repricing the entire value proposition of cloud migration
Hyperscaler consolidation reduces competitive options, concentrates pricing power, and increases lock-in risk
Architectural disruption resets the economics entirely, potentially creating new infrastructure models the current cloud migration playbook does not account for
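The repricing risk in the first scenario is straightforward to size. A minimal sketch, assuming a hypothetical $1M-per-year cloud commitment and illustrative escalation rates (neither figure comes from the text):

```python
# Hedged sketch of the first scenario above: how a multi-year cloud
# commitment reprices if hyperscalers pass through true infrastructure
# cost. Base spend and escalation rates are hypothetical illustrations.

def total_cloud_spend(annual_base: float, years: int, escalation: float) -> float:
    """Total spend over `years` if the annual price grows by `escalation` per year."""
    return sum(annual_base * (1 + escalation) ** t for t in range(years))

base = 1_000_000.0   # hypothetical $1M/year commitment
for esc in (0.0, 0.10, 0.25):
    spend = total_cloud_spend(base, 5, esc)
    print(f"{esc:.0%} annual escalation -> 5-year spend ${spend:,.0f}")
```

Even modest pass-through escalation compounds quickly over a multi-year commitment, which is why the pricing assumption belongs in the risk model, not in the fine print.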
The enterprise that migrates to cloud on the assumption of stable pricing is making an incomplete risk transfer. They have offloaded hardware risk. They have taken on pricing and dependency risk in its place.
The strategically sound position is cloud pragmatism: use hyperscaler infrastructure to carry the hardware risk you cannot manage, while maintaining the data sovereignty, workflow portability, and architectural optionality that protects you from the risks you are taking on.
The critical insight for enterprises with long-lived application estates — whether built on IBM Power, Oracle, SAP, or proprietary x86 infrastructure — is that the platform decision and the hardware ownership decision are separable. The decades of business logic, data architecture, and workflow integration that define an enterprise’s operational core are not the liability. The physical hardware beneath them is.
The Accounting Time Bomb
What GAAP Hasn’t Caught Yet
Of the six forcing functions, the accounting distortion is the least visible and potentially the most consequential for investors and enterprise decision-makers alike.
The gap is structural: AI hardware with a 3-year economic life is being depreciated over 5–6 years on corporate income statements. This produces a systematic overstatement of earnings during the economically useful period of the asset.
The Earnings Quality Problem
When an asset is depreciated more slowly than it loses economic value, the depreciation charge understates the true cost of maintaining the revenue that asset generates. Income appears higher than it would if the accounting reflected economic reality. Return on capital appears stronger. The business looks more profitable than it is.
For the hyperscalers, at $650 billion in annual AI capex being depreciated over 5–6 years when the economic life is 3 years, the earnings overstatement is not a rounding error. It is a structural feature of current financial reporting.
Brightman’s analysis makes the mechanism explicit: the H100 GPU generating 137% ROI in year two is generating negative 34% ROI by year four. But the depreciation schedule spreads the cost across years one through six. The income statement in years four, five, and six is charging one-sixth of the acquisition cost against revenue the asset can no longer efficiently generate.
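The year-by-year timing of that mismatch can be sketched directly. Amounts are normalized to a cost of 100 and are illustrative; the 3-year economic life and 6-year schedule come from the analysis above.

```python
# Illustrative timing of the earnings distortion: straight-line charges
# spread over 6 years vs. economic value consumed over 3. Normalized cost.

cost, book_life, econ_life = 100.0, 6, 3
book_charge = cost / book_life   # what the income statement reports each year
econ_charge = cost / econ_life   # what the asset actually loses in years 1-3

for year in range(1, book_life + 1):
    economic = econ_charge if year <= econ_life else 0.0
    gap = economic - book_charge   # positive = earnings overstated that year
    label = "overstated" if gap > 0 else "understated"
    print(f"Year {year}: book charge {book_charge:.1f}, "
          f"economic cost {economic:.1f} -> earnings {label} by {abs(gap):.1f}")
# Years 1-3: earnings overstated by 16.7 per year; in years 4-6 the books
# keep charging a dead asset, which is when the write-downs surface.
```

The cumulative charges net out by year six, but the timing does not: profits are front-loaded into the years when the asset looks productive, and the reckoning lands afterward.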
The Regulatory Reckoning
The FASB has not yet moved on accelerated depreciation for AI infrastructure. But when $650 billion in annual capex produces shrinking margins and 3-year economic obsolescence, the SEC will eventually ask why income statements reflect 5–6 year depreciation.
The companies that will be valued most richly in the AI era are not the ones with the most hardware. They are the ones that have converted hardware risk into application advantage — using cloud infrastructure as the carrier of physical obsolescence while deploying capital against the software, data, and workflow layer where durable competitive advantage actually lives.
The Sovereign Enterprise
What Enterprises Should Actually Do
The six forcing functions do not lead to a simple prescription of “migrate everything to cloud.” They lead to a more strategically demanding position: understand what you are transferring, what you are retaining, and where your actual competitive advantage lives.
The sovereign enterprise is not the one that owns the most hardware. It is the one that owns the decisions that matter.
Transfer the Hardware Risk
The hardware obsolescence cycle, the supply chain competition, the architectural uncertainty, the operational overhead — none of these create competitive advantage for any enterprise whose business is not the infrastructure itself. Transfer them. Convert capex to opex and redeploy the capital against the application layer.
The platform decision and the hardware ownership decision are separable. Make them separately.
Retain Data Sovereignty
The single greatest strategic risk in full cloud migration is not pricing. It is data dependency. Retain the data. Maintain portability architectures. Build your AI capabilities on your own data assets, not on data models trained by your infrastructure vendor. The competitive advantage in the AI era is domain-specific intelligence — and that intelligence is only as sovereign as the data that generates it.
Invest in the Application and Workflow Layer
The companies generating durable returns from AI are not the ones building data centers. They are the ones deploying AI against specific domain knowledge, proprietary workflows, and accumulated business logic that general-purpose models cannot replicate. This is where enterprise investment should concentrate.
Maintain Architectural Optionality
Do not bet your infrastructure strategy on the permanence of the current compute architecture. Avoid deep integration with any single hyperscaler’s proprietary AI infrastructure. Build on open standards where possible. The switching cost you accept today is the pricing leverage you surrender tomorrow.
Watch the Orbital Frontier
The next infrastructure tier is forming — and it exists entirely outside the geopolitical constraints that define the current compute landscape. SpaceX has filed with the FCC for permission to launch up to one million data center satellites. Terafab is manufacturing the D3 chip specifically designed for orbital radiation environments. Microsoft Azure Space and AWS Ground Station are building ground connectivity infrastructure.
The orbital compute tier is not a future roadmap item. It is being built now, by the most capable launch and manufacturing organization on the planet, at a scale that dwarfs anything previously attempted.
Every forcing function in this white paper operates within the assumption that compute infrastructure is terrestrial — subject to geography, trade policy, fab concentration, and energy grid constraints. Orbital compute breaks all four assumptions simultaneously.
Sovereignty is not about where your hardware sits. It is about whether you control your data, your workflows, your decisions, and your competitive position — regardless of who carries the infrastructure risk beneath you.
Source Is Sovereignty
The Capex Reckoning is not a prediction about what might happen to AI infrastructure investment. It is a description of what is already happening — documented in Research Affiliates’ financial analysis, echoed across the investment and technology community, and visible in the supply chain dynamics reshaping chip procurement globally.
The six forcing functions — economic obsolescence, supply foreclosure, operational drag, architectural uncertainty, accounting distortion, and geopolitical fracture — are not independent risks to be managed separately. They are a convergent structural shift that is ending the era of private enterprise hardware ownership as a viable competitive strategy.
The SaaS ARR reckoning happened when the market realized that recurring revenue was not as safe as priced.
The Capex Reckoning is happening now — as the market begins to realize that infrastructure investment is not as productive as booked.
The question for every enterprise with technology infrastructure on its balance sheet is not whether this reckoning is coming. It is whether they are positioned on the right side of it when it arrives.
Source is Sovereignty. The enterprises that own their data, their decisions, and their domain intelligence — regardless of who owns the hardware — are the ones that will compound through this transition.