Follow the CapEx

$745B spent. $22B earned.
Every dollar of cumulative 2023–25 AI capital expenditure, traced from spender → silicon → current workload. The balance-sheet mirror of Follow the Revenue. Anchored on NVIDIA DC revenue ($356B cumulative, Tier 1A) and cross-checked against Microsoft, Google, Meta, and Amazon 10-K CapEx disclosures.

Cumulative 2023–25 AI CapEx · Balance-sheet view · Current state of each asset

So what does the Sankey tell us?
3. Can AI revenue close the gap? Total AI CapEx hit ~$380B in 2025 alone. Today's AI customer revenue is ~$17.5B/yr.
Question | What it requires | Verdict
Can revenue compound fast enough? | 3–4× annual growth to cover depreciation | No ($17.5B rev vs $41B dep)
Do ad/cloud workloads cover their share? | Existing business models justify the CapEx | Yes (~$170B self-funding)
Will CapEx growth flatten? | New purchases stop growing so depreciation stabilises | No (FY27 $368B, accelerating)
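The $41B depreciation figure in the verdict can be reproduced from the Sankey's own right-hand-side node values, assuming (my reading of the numbers, not stated explicitly in the text) that it covers the AI-revenue-attributable fleets at a 3.5-year straight-line life:

```python
# Reproducing the ~$41B/yr depreciation figure from the RHS node values,
# assuming it covers the Paid, Free Tier, and Training fleets at a
# 3.5-year straight-line life. The grouping is inferred, not stated.
fleet = {
    "Inference (Paid)": 40e9,
    "Inference (Free Tier)": 20e9,
    "Model Training": 84e9,
}
life_years = 3.5
annual_dep = sum(fleet.values()) / life_years
print(f"${annual_dep / 1e9:.1f}B/yr")  # ~$41.1B/yr, vs ~$17.5B/yr of revenue
```

$144B of fleet over 3.5 years lands almost exactly on the $41B figure, which is why the paid-revenue verdict is a "No" today.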
Jensen's $1T target: cumulative NVIDIA AI-chip revenue through CY2027 = ~$1.07T at projected rates. That's $1T of silicon that needs to earn its keep over a 3–4yr depreciation window.
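To put that depreciation window in numbers, here is a rough upper-bound sketch: what ~$1.07T of silicon would imply in annual straight-line depreciation if it all sat on balance sheets at once (an overstatement, since vintages are staggered and early ones roll off before the last arrive).

```python
# Annual straight-line depreciation implied by ~$1.07T of silicon at
# the 3.5-4 year useful lives from the assumptions table. Treating the
# full amount as live simultaneously is an assumed upper bound.
silicon_total = 1.07e12  # projected cumulative NVIDIA AI-chip revenue through CY2027
for life_years in (3.5, 4.0):
    annual = silicon_total / life_years
    print(f"{life_years:g}yr life -> ${annual / 1e9:.0f}B/yr of depreciation")
```

Even discounted for staggered vintages, that is an annual charge an order of magnitude above today's ~$17.5B/yr of AI customer revenue.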
The Sankey above shows the stock (cumulative CapEx). The chart below shows the flow (annual) — and where it's heading.
The Three-Stage Lag
Why the gap widens before it closes
CapEx is when the money leaves the bank account — NVIDIA gets paid first and most. Depreciation is when that money hits the P&L, spread over 3–4 years of useful life. AI revenue is when the bet pays back. Three conveyor belts, each one trailing the last.
The red–green gap is the convergence problem. Revenue is compounding fast, but the depreciation wave hasn't crested yet. Convergence requires CapEx growth to flatten AND revenue to keep compounding; optimistically that happens in 2030 or later, and possibly never on pure AI revenue alone.
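The "wave hasn't crested" point falls out of a toy vintage model. The cumulative $745B and the $380B 2025 figure are the article's; the 2023/2024 split below is an assumed illustration.

```python
# Toy vintage model of the depreciation belt. Cumulative 2023-25 CapEx
# ($745B) and the 2025 figure ($380B) are from the article; the
# 2023/2024 split is assumed for illustration only.
CAPEX = {2023: 120e9, 2024: 245e9, 2025: 380e9}
LIFE = 4  # years of straight-line depreciation

def dep(year):
    """Depreciation hitting the P&L in `year` from all live vintages."""
    return sum(c / LIFE for v, c in CAPEX.items() if v <= year < v + LIFE)

for y in range(2023, 2029):
    print(y, f"${dep(y) / 1e9:.0f}B")
# Even with zero purchases after 2025, the annual charge keeps climbing
# for two years after the spend and stays near its peak through 2026.
```

That lag is the whole mechanism: each belt trails the one before it, so the P&L pain arrives after the cash has already left.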

Structure notes

  • LHS = 8 source buckets, cumulative 2023–25 CapEx ($745B). Anchored from NVIDIA DC $356B (Tier 1A) + cross-checked against MSFT/GOOG/META/AMZN 10-K disclosures.
  • Middle = 5 nodes — what the money physically bought. NVIDIA GPU $240B cross-checks against calendarised NVIDIA DC revenue (Tier 1A).
  • RHS = 6 nodes, the current physical state of each asset (balance-sheet view, at full purchase price):
    • Inference (Paid) ($40B) — fleet currently serving paid API + subscription queries
    • Inference (Free Tier) ($20B) — fleet serving free-tier ChatGPT, Gemini in search, Meta AI
    • Inference (Ad Platform) ($170B) — fleet running ads, feed ranking, search, reco. The real chip-eater.
    • Model Training ($84B) — fleet currently dedicated to training runs
    • Idle ($50B) — commissioned, powered, no current workload
    • In build / in transit ($240B) — CapEx committed, not yet commissioned. Mostly 2025 DC shell + substations.
  • Bridge to Follow the Revenue: Paid + Free Tier fleet ($60B) → generates ~$14B/yr COGS via depreciation + hosting + electricity. Ratio ~4.3× = ~3.5yr asset life + operating overhead.
  • Ad Platform justification: Meta/Google/MSFT don't need inference revenue to justify this CapEx — their ads/search/cloud revenue already covers it. That $170B is self-funding through existing business models.
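The bridge note's headline ratio is simple arithmetic on the two RHS inference nodes; the split of COGS into depreciation vs overhead isn't spelled out, so only the ratio itself is checked here.

```python
# Sanity check on the bridge to Follow the Revenue: $60B of Paid +
# Free Tier fleet against ~$14B/yr of inference COGS (depreciation
# + hosting + electricity). Figures are the article's.
fleet_value = 40e9 + 20e9  # Inference (Paid) + Inference (Free Tier)
annual_cogs = 14e9
print(f"fleet / COGS = {fleet_value / annual_cogs:.1f}x")  # ~4.3x
```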
Key Assumptions
Assumption | Value used | Source / rationale | Tier
NVIDIA DC revenue (cumulative 2023–25) | $356B | FY24 $47.5B + FY25 $115.2B + FY26 $193.7B, per quarterly earnings | 1A
NVIDIA revenue split (GPU vs networking) | ~85% / ~15% | NVIDIA segment reporting; networking = InfiniBand + NVLink | 1B
Silicon as % of total AI CapEx | ~55% | Industry rule of thumb; cross-checked against hyperscaler 10-K CapEx vs known GPU purchases | 2A
GPU useful life (depreciation period) | 3.5–4 years | MSFT/GOOG extended server lives from 4 to 6 years, but GPU-specific life is shorter; Meta uses a 5-year blended life | 2B
Commissioning lag (purchase → production) | 6–18 months | DC construction timelines; substation permitting is the critical path | 3A
AI-attributable CapEx method | Growth above 2022 baseline | Hyperscalers don't cleanly split AI vs non-AI CapEx; pre-AI-boom baseline used as proxy | 3A
Inference fleet → annual COGS | $14B/yr | From Follow the Revenue 2025 inference spend; cross-checks at ~4× ratio to fleet value | 2A
Ad Platform fleet allocation | ~$170B | Meta ($55B GPU + ads infra), Google (TPU fleet for search/ads), MSFT (Bing/Copilot); largest single workload | 3B
China smuggled NVIDIA GPUs | 474K H100e | Epoch AI tentative estimate; export controls make this inherently uncertain | 3C
Idle compute (utilisation gap) | ~$50B | Residual after allocating to known workloads; public utilisation data is scarce (CoreWeave S-1, earnings commentary) | 3C
Tier key: 1A/1B = directly sourced from filings or earnings. 2A/2B = derived from sourced data with clear methodology. 3A/3B/3C = modeled estimates with stated assumptions.
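The "growth above 2022 baseline" method from the table can be sketched in a few lines. The per-company figures below are hypothetical placeholders, not the article's inputs:

```python
# AI-attributable CapEx proxy: spend above the pre-AI-boom (2022)
# run-rate. All figures in the example are hypothetical placeholders.
def ai_attributable(total_capex_by_year, baseline_2022):
    """Treat any CapEx above the 2022 run-rate as AI-attributable."""
    return {y: max(0.0, c - baseline_2022) for y, c in total_capex_by_year.items()}

example = ai_attributable({2023: 35e9, 2024: 55e9, 2025: 80e9}, baseline_2022=25e9)
print({y: f"${v / 1e9:.0f}B" for y, v in example.items()})
# -> {2023: '$10B', 2024: '$30B', 2025: '$55B'}
```

The obvious weakness, which the 3A tier already concedes, is that ordinary (non-AI) cloud growth above the 2022 baseline gets counted as AI spend.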