Two Tales of Compute: The Battle for AI's Operational Future (Part 2)
Training compute sets the frontier, but inference compute determines profitability. In Part 2, I examine efficiency breakthroughs, vertical integrations, and the AI boom's fragile foundations.

In Part 1 of this compute series, I examined how Oracle’s historic $244 billion single-day market cap surge signaled a fundamental transformation in AI economics. The catalyst was a staggering $455 billion compute backlog anchored by OpenAI’s $300 billion commitment, and it underscored how AI has shattered the Moore’s Law paradigm that governed technology for six decades.
Where Moore’s Law promised exponentially cheaper compute every two years, AI’s transformer architecture demands exponentially more expensive compute for linear capability improvements. GPT-5’s estimated $500 million training cost and projections of 10x increases for next-generation models illustrate this inversion: we’ve moved from exponential improvement at declining costs to exponential costs for incremental gains.
This shift has created an oligopolistic training compute market where access to massive, synchronized GPU clusters that require 100+ MW data centers and billions in capital determines…