Decoding Discontinuity
Reading the Tea Leaves: What AI Capital Intensity Actually Reveals

As AI capex soars, the real signal isn't how much firms spend but what they buy. I map out 3 scenarios for AI compute by 2027: inference dominance, training bifurcation, or a post-transformer reset.

Raphaëlle d'Ornano
Dec 09, 2025

Amid the “Bubble” and “Depreciation” debates, current financial disclosures fall short. Companies lump “AI Capex” together as if all infrastructure were equivalent; it isn’t. Training hardware faces obsolescence in roughly 18 months, while inference setups can deliver returns for four to five years. Without better transparency on workload mix, customer concentration, custom silicon, and utilization, investors are navigating $400B+ decisions with incomplete data. Those who adapt will thrive; missteps could lead to major write-downs or worse. By 2027, the divide will be clear.
