King Sam and AI Circularity: How Concentrated Bets on OpenAI Create Systemic Risk
Vendor financing in infrastructure is normal. The scale and concentration in AI are unprecedented. So is the dependence on a single actor’s ability to build a durable moat.
Hundreds of billions in AI infrastructure financing now flow through circular deals where customers are suppliers, suppliers are investors, and all roads lead to OpenAI. The greatest risk isn’t that AI fails; it’s that the technology succeeds while the single company everyone has backed fails to build a defensible moat, triggering contagion that strands capital and stalls the buildout before the transformation completes. You can be bullish on AI and bearish on this financial architecture. They’re separable propositions.
The past month has witnessed a series of deals so circular, so concentrated, and so unprecedented in scale that they’ve transformed the AI boom from a technology story into a financial stability question.
At the center sits OpenAI. Not just as a symbol of artificial intelligence’s potential, but as the potential single point of failure in a system where hundreds of billions of dollars in capital flow through self-reinforcing loops.
I argued three weeks ago, in my analysis of AI valuation dynamics, that the bubble debate raises some valid short-term points but misses much of the long-term picture. Those arguments, however, are separate from these recent developments.
What we’re witnessing now is the emergence of a circular financing architecture so brazen that it has moved beyond mere critique and into meme territory.
The actors involved in this capital carousel seem to have justified these deals by conflating two distinct beliefs:
The first is robust: AI represents a structural economic shift comparable to railroads or the internet. Here, I agree.
The second is fragile: OpenAI possesses a durable moat that will capture the lion’s share of this revolution’s value. Here, well, let’s just say I have my doubts.
In recent months, there has been much hand-wringing over AI’s ROI, runaway valuations, infrastructure costs, and various other existential fears, all raising doubts about whether AI will deliver on the hype and justify the investment.
However, we might never get to find out. Because the greatest risk is not that the technology fails or that the value is not realized. It’s that the single company everyone has backed fails to justify the capital flowing through it, triggering a contagion that stalls the buildout of critical infrastructure and strands trillions in investment before the transformation even completes.
The Architecture of Circularity
Consider the deals announced over the past 30 days:
Broadcom and OpenAI formalized their partnership on Monday to jointly build and deploy 10 gigawatts of custom AI accelerators, marking a major expansion of their 18-month collaboration and sending Broadcom shares up nearly 10%.
Nvidia intends to invest up to $100 billion in OpenAI through non-voting shares while simultaneously supplying at least 10 GW of GPU systems.
OpenAI will use Nvidia’s capital to purchase Nvidia’s chips—a portion of the investment literally returns to its source.
AMD will supply hundreds of thousands of MI450 GPUs and build dedicated gigawatt-scale facilities, a deal that could generate over $100 billion in revenue for AMD across four years. In exchange, OpenAI receives an option to acquire 160 million AMD shares at one cent each, approximately 10% of the company.
Oracle’s rumored $300 billion “Stargate” deal promises 4.5 GW of data center capacity over five years. To deliver this, Oracle plans to purchase roughly 400,000 Nvidia Blackwell chips worth $40 billion. Oracle’s shares surged 40% on the announcement despite no formal filing.
Meanwhile, Nvidia holds an equity stake in CoreWeave, the cloud startup from which Microsoft—OpenAI’s largest investor and Azure provider—rents AI infrastructure to support OpenAI’s compute needs.
Map these relationships and a pattern emerges: customers are suppliers, suppliers are investors, investors are customers. The same entities occupy every role simultaneously. Capital flows in circles, making it nearly impossible to distinguish genuine demand from financial engineering.
This is vendor financing with equity kickers, creating cross-dependencies that obscure true economic value.
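To make the circularity concrete, here is a minimal sketch of the deal web as a directed graph, using only the relationships described above. The edge list is simplified and surely incomplete; it exists to show how quickly the roles blur.

```python
# A minimal sketch of the deal web as a directed graph. Edges follow the
# relationships described above, as (entity, counterparty, role of entity).
# Coverage and labels are simplified, not exhaustive.
from collections import defaultdict

deals = [
    ("Nvidia", "OpenAI", "investor"),        # up to $100B, non-voting shares
    ("Nvidia", "OpenAI", "supplier"),        # 10+ GW of GPU systems
    ("OpenAI", "Nvidia", "customer"),        # buys those chips
    ("OpenAI", "AMD", "investor"),           # option on ~10% of AMD
    ("OpenAI", "AMD", "customer"),           # buys MI450 GPUs
    ("AMD", "OpenAI", "supplier"),
    ("Oracle", "OpenAI", "supplier"),        # Stargate capacity
    ("Oracle", "Nvidia", "customer"),        # ~400k Blackwell chips
    ("Nvidia", "CoreWeave", "investor"),
    ("Microsoft", "OpenAI", "investor"),
    ("Microsoft", "CoreWeave", "customer"),  # rents AI infrastructure
]

roles = defaultdict(set)
for entity, _counterparty, role in deals:
    roles[entity].add(role)

for entity in sorted(roles):
    if len(roles[entity]) > 1:
        print(f"{entity} is simultaneously: {', '.join(sorted(roles[entity]))}")
```

Every entity the script flags is a channel through which a shock on one side of a deal reappears on the other.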
Beyond the recent headline deals, Nvidia, CoreWeave, and other actors in these arrangements also invest in AI startups. Big Tech’s relationships with Anthropic mirror aspects of this structure. But OpenAI remains the loudest signal, the clearest epicenter. Everything flows through Sam Altman’s company, making it the keystone upon which the entire structure depends.
Why Compute Became Strategy
To understand why this architecture has even emerged, we first need to understand why AI’s appetite for compute has become insatiable, and why that appetite represents a strategic choice.
Scaling laws provide the mathematical foundation here. Research by OpenAI, MIT, and IBM demonstrates that model performance improves predictably as a power law of parameters, data, and compute. Each frontier model now costs roughly 10X as much as its predecessor to train. This is the price of maintaining technical leadership in a race where second place offers little consolation.
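To see why the 10X treadmill is hard to step off, here is a sketch of the power-law intuition. The functional form (loss falling as a power of compute) follows the published scaling-law literature; the constants below are invented for illustration and fitted to nothing.

```python
# A sketch of the power-law intuition. Loss = irreducible floor plus a
# power-law term that shrinks with compute. Constants are assumed, not
# fitted to any real model.
E, A, alpha = 1.7, 6.3, 0.05  # floor, scale, exponent (illustrative)

def loss(compute_flops: float) -> float:
    """Power-law loss curve: more compute, predictably (but slowly) better."""
    return E + A / compute_flops**alpha

for gen, c in enumerate([1e22, 1e23, 1e24, 1e25], start=1):
    print(f"generation {gen}: {c:.0e} FLOPs -> loss {loss(c):.3f}")
# Each 10X jump in compute buys a smaller absolute improvement:
# the price of leadership compounds while the returns flatten.
```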
But as I explored in “Two Tales of Compute,” we must distinguish between training compute and inference compute. Training is the upfront capital expenditure: thousands of GPUs running continuously for weeks to create a foundation model. Inference is the operational expense: the per-query cost of actually serving that model to users. Training compute is where the compute arms race lives. Inference compute is where the economics ultimately resolve.
This distinction matters because scaling laws describe training dynamics, not commercial viability.
Training compute is built on the paradigm of “bigger is better.” Today’s SOTA models from OpenAI, Anthropic, and other key LLM players are produced by massive training runs consuming huge amounts of compute. Most researchers agree that, as of today, this compute-heavy path remains necessary given the current state of the technology.
However, alternative architectures are starting to emerge, both for training compute and inference compute. Mixture of Experts (MoE) models, exemplified by DeepSeek and Qwen3, activate subsets of parameters rather than entire networks, reducing inference compute needs by 50-70%.
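A back-of-envelope sketch shows where those savings come from. All numbers here are hypothetical (8 experts, top-2 routing, a block of always-active dense parameters), chosen so the result lands inside the 50-70% range cited above.

```python
# Why MoE trims per-token compute: only the routed experts fire.
# All parameter counts and routing choices below are assumed.
shared_params = 20e9          # attention/embeddings, paid on every token
expert_params = 100e9         # total parameters across all experts
num_experts, active_experts = 8, 2   # top-2 routing

dense_cost = shared_params + expert_params  # dense baseline: everything fires
moe_cost = shared_params + expert_params * active_experts / num_experts

reduction = 1 - moe_cost / dense_cost
print(f"per-token compute: {moe_cost / dense_cost:.0%} of dense "
      f"({reduction:.0%} saved)")
```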
If MoE architectures mature faster than expected, or if quantum computing breakthroughs make GPUs obsolete, these assets become stranded before depreciation schedules run their course.
And as ever-bigger models yield diminishing returns, the training-heavy compute paradigm finds itself increasingly challenged.
Hence, the circular deals aren’t just concentrated. They’re concentrated on a particular technological approach that might not age well.
The Missing Metric: Compute Payback
Both my research and my recent involvement in funding rounds in the LLM space lead me to the same conclusion: a coherent framework for evaluating whether training compute expenditures will ever pay for themselves is clearly missing.
I call this Compute Payback. This is the most critical metric the industry isn’t tracking, perhaps deliberately.
In SaaS businesses, Customer Acquisition Cost (CAC) provides discipline. You evaluate whether the cost of acquiring customers will be recouped through lifetime value. The model either works or it doesn’t. For AI labs, training compute serves the same function. It’s the upfront cost of building a product (the model) that must generate sufficient revenue before obsolescence.
But unlike CAC, training compute doesn’t scale linearly with customers. It’s front-loaded and speculative. You invest $5 billion to train GPT-5, then hope it generates enough revenue through API calls, enterprise licenses, and subscriptions before GPT-6 becomes necessary. If competitors commoditize your model’s capabilities faster than you can monetize them, or if the next generation’s training costs escalate faster than your revenue growth, the ROI never materializes.
Training compute should follow a clear payback curve: initial investment, revenue ramp during the model’s competitive window, payback before the next generation’s training costs hit.
Below is a hypothetical illustration:
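Standing in for a chart, this toy model tracks cumulative gross margin from one training run against its cost. Every figure is invented: a $5 billion run, a 50% gross margin, an eight-quarter competitive window, and a next-generation training bill that lands in quarter six.

```python
# A hypothetical Compute Payback curve. All numbers are invented for
# illustration: a $5B training run, a revenue ramp that decays as rivals
# catch up, and the next generation's bill arriving mid-window.
training_cost = 5.0                                            # $B, upfront
quarterly_revenue = [0.5, 1.0, 1.5, 2.0, 2.0, 1.5, 1.0, 0.5]   # $B, ramp then decay
gross_margin = 0.5                                             # after serving costs (assumed)
next_gen_bill_quarter = 6                                      # when the next run is due (assumed)

cumulative = 0.0
for q, rev in enumerate(quarterly_revenue, start=1):
    cumulative += rev * gross_margin
    note = "  <- next-gen training bill lands" if q == next_gen_bill_quarter else ""
    print(f"Q{q}: cumulative margin ${cumulative:.2f}B of ${training_cost:.0f}B{note}")
# In this toy run, payback arrives only in the final quarter of the
# competitive window, after the next generation's bill has already landed.
```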
Consider OpenAI’s current trajectory. In the first half of 2025, the company generated $4.3 billion in revenue, spent $6.7 billion on R&D, and burned $2.5 billion in cash. Full-year targets sit at $13 billion in revenue against $8.5 billion in losses. These figures extend ominously through 2030 projections. Private markets value the company at $500 billion on the assumption that these losses are temporary, that each model generation will eventually recoup its training costs with margin to spare, and that, of course, the company will have found a “path to profitability” before 2030.
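Quick sanity math on those figures (revenue, losses, and valuation as cited above; no forecast implied):

```python
# Sanity math on the figures above. Inputs are from the article; the
# output is descriptive arithmetic, not a forecast.
valuation, fy_revenue, fy_losses = 500e9, 13e9, 8.5e9

print(f"implied forward revenue multiple: ~{valuation / fy_revenue:.0f}x")
print(f"losses as a share of revenue: ~{fy_losses / fy_revenue:.0%}")
```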
The question of whether “this time is different” deserves scrutiny.
If OpenAI trains GPT-5 for $10 billion and GPT-6 costs $100 billion, revenue must scale proportionally. The circular financing architecture exists precisely because capital markets have become uncertain about this question:
Nvidia isn’t making a pure investment—it’s securing a customer.
AMD isn’t just selling chips—it’s taking equity in lieu of cash.
Oracle isn’t simply providing cloud services—it’s subsidizing them at 14% gross margins (versus 70% software margins), betting on eventual profitability.
This is the rational lens through which to evaluate systemic risk. If Compute Payback stays on track—if each model generation recoups training costs before the next one arrives—the system remains sound.
But if the payback period exceeds the model’s competitive lifespan, or if training costs compound faster than revenue, we’re witnessing the most expensive science experiment in history, funded by the capital markets’ temporary suspension of disbelief.
OpenAI’s Fragile Moat: The Keystone Problem
The entire circular architecture rests on a single assumption: that OpenAI can build a defensible, durable business faster than competition erodes its advantages. And that by 2030 it will be profitable, or at least cash flow positive.
Every valuation in the chain depends on this assumption. Let’s examine OpenAI’s actual moats:
The brand is not the moat. As I’ve argued previously on Decoding Discontinuity, consumer brand recognition of ChatGPT is (very) powerful. But I do not view this as a source of a “durable growth moat.”
Technical prowess alone provides insufficient defense when Meta, Google, Microsoft, and Amazon are building competitive models while simultaneously developing custom silicon to reduce dependency on Nvidia.
First-mover advantage is eroding. Foundation models are increasingly commoditized. Open-source alternatives improve monthly. The gap between frontier models and open-weights models has compressed from 18 months to 6 months, and on some benchmarks to mere weeks.
This leaves OpenAI with one plausible path to a durable moat: successfully unbundling and rebundling applications into agentic systems at scale.
This is the vision: AI agents that handle complex multi-step tasks across entire processes, creating lock-in through distribution and integration depth rather than model superiority alone.
For now, this remains speculative and unproven. But clearly, there is a lot of movement here.
As Michael Spencer wrote this week in “AI Supremacy” as part of his overview of the State of AI Report 2025 by Air Street Capital’s Nathan Benaich: “Recent developments in App SDKs in ChatGPT mean it could evolve into a Super app. A lot of Generative AI’s real-world integration rests on the promise of these AI agents.”
Meanwhile, last week at OpenAI’s annual DevDay conference in San Francisco, CEO Sam Altman announced that Figma’s application is now integrated into ChatGPT: a user can call on Figma directly from within a conversation and have it perform a task.
Figma’s stock soared on the announcement, evidence that the market understands the potential agentic and orchestration vision. But real implementation at scale remains in the distance.
Beyond the moat question lies an existential dependency that OpenAI is clearly addressing with urgency: access to compute. If OpenAI loses access to training infrastructure—whether through capital constraints, supplier issues, or power availability—everything collapses instantaneously. The company must continuously raise capital to fund each model generation or risk being leapfrogged. Should capital markets sour, the flywheel stops. This vulnerability is shared by Anthropic and every other frontier lab, but OpenAI’s position as the circular architecture’s keystone makes its failure catastrophic for the entire network.
Three Economic Distortions
The circular architecture produces three distinct consequences, each amplifying systemic risk.
1. It Enables Unsustainable Compute Buildout
The circular deals translate market enthusiasm into infrastructure—successfully, for now. OpenAI secures compute that would otherwise be unattainable given cash flow constraints. The technology imperative discussed earlier gets funded.
But as I examined in my “Atoms Meets Bits” analysis, physical constraints impose hard limits that financial engineering cannot overcome. Sequoia Capital’s research shows the AI capital-revenue gap has ballooned from $125 billion to $500 billion. Private credit now finances approximately $50 billion quarterly in AI infrastructure, and some analysts are worried about the increasingly complex and higher-risk debt vehicles being used to finance that spending.
Power constraints are acute. AI’s electricity demands could consume 20% of US grid capacity additions, per Goldman Sachs research. OpenAI alone faces $1.7 billion annually in power costs. These are thermodynamic realities, not financial obstacles, and they constrain the buildout regardless of capital availability.
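A back-of-envelope conversion makes that power bill tangible. The annual cost is the figure cited above; the electricity price is an assumed wholesale-ish rate.

```python
# What $1.7B/yr of electricity implies in continuous draw. The annual
# cost is from the text; the power price is an assumption.
annual_cost = 1.7e9         # $/yr (cited above)
price_per_mwh = 80.0        # $/MWh, assumed wholesale-ish rate
hours_per_year = 8760

mwh_per_year = annual_cost / price_per_mwh
avg_draw_gw = mwh_per_year / hours_per_year / 1000  # MW -> GW
print(f"~{mwh_per_year / 1e6:.0f} TWh/yr, ~{avg_draw_gw:.1f} GW continuous draw")
```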
The boosts are real, but time-limited. We’re in a window where circular financing accelerates infrastructure deployment, but the window closes as physics reasserts itself.
2. It Amplifies Bubble Dynamics
Circularity isn’t novel. Classic vendor financing catalyzed previous bubbles. The pattern is familiar from the telecom bust of the early 2000s: equipment makers extended credit to customers, inflating revenues while disguising demand weakness. When customers couldn’t pay, the vendors’ receivables evaporated, triggering cascading failures.
Nvidia’s $100 billion OpenAI investment, recycled as chip purchases, replicates this. The Guardian noted the parallel; investor James Anderson called it “uncomfortably reminiscent” of dot-com excess. But there’s a crucial difference: telecoms funded “the internet” broadly. Today’s circular deals fund a single company with an unproven business model.
3. It Systematically Distorts Valuations
The circular architecture makes it impossible to determine intrinsic value separate from the OpenAI assumption.
Oracle rallied 40% on Stargate rumors, yet The Information revealed AI cloud operations generate 14% gross margins on $900 million quarterly revenue—far below legacy software economics. Oracle is subsidizing OpenAI’s compute by purchasing Nvidia GPUs and leasing them below cost, expecting scale to eventually yield profits. If OpenAI’s usage falls short, Oracle holds depreciating hardware and underutilized data centers.
Two accounting concerns warrant scrutiny. First, as the Financial Times recently noted, the validity of Remaining Performance Obligations (RPOs) from long-term contracts lacks verification in formal filings. Second, GPU depreciation schedules are too long: these assets should arguably be depreciated over 18-24 months, reflecting training compute’s competitive lifespan, rather than the 5-6 years companies justify by assuming secondary use in inference. This issue, which I had previously flagged for CoreWeave, now extends to Oracle and private market players like Crusoe.
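The stakes of that schedule choice are easy to quantify. The $10 billion fleet below is hypothetical; depreciation is straight-line for simplicity.

```python
# How the depreciation schedule alone moves reported cost.
# The fleet size is assumed; straight-line depreciation for simplicity.
fleet_cost = 10e9  # $, hypothetical GPU fleet

for label, months in [("competitive-lifespan schedule", 24),
                      ("typical filing schedule", 72)]:
    annual_expense = fleet_cost / (months / 12)
    print(f"{label}: {months} mo -> ${annual_expense / 1e9:.2f}B/yr")
# The same hardware books ~3x less annual expense on the longer schedule,
# flattering margins today at the cost of write-down risk later.
```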
AMD could generate $100 billion from OpenAI over four years, but the deal structure reveals fragility. OpenAI’s option to acquire 160 million AMD shares at one cent—approximately 10% of the company—represents vendor financing through equity rather than cash. AMD must now ramp manufacturing and supply chains to deliver hundreds of thousands of MI450 GPUs. Any delay or demand shortfall jeopardizes projections. AMD’s valuation hinges on translating this flagship deal into sustained, diversified growth while maintaining margins against aggressive competition.
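The warrant’s asymmetry reduces to one line of arithmetic. The share count and penny strike are the reported terms; the AMD share price below is assumed purely for illustration.

```python
# The warrant's asymmetry in one line of arithmetic. Share count and
# strike are the reported deal terms; the price is hypothetical.
shares, strike = 160e6, 0.01       # reported terms
assumed_price = 200.0              # $/share, assumed for illustration

exercise_cost = shares * strike
stake_value = shares * assumed_price
print(f"${exercise_cost / 1e6:.1f}M exercises a stake worth "
      f"~${stake_value / 1e9:.0f}B at ${assumed_price:.0f}/share")
# Equity is doing the work that cash would do in a conventional supply deal.
```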
We’re witnessing inflated valuations driven not by profit expectations but by circular financing that obscures genuine demand. The underlying businesses may have real potential, but current prices embed heroic assumptions about a single customer’s trajectory.
Why This Time Is Different (And Why It Isn’t)
The bull case rests on legitimate foundations. For the highlighted public companies, cash flows are demonstrably stronger than those of the dot-com bubble days. These are profitable enterprises with real businesses, not pure speculation. AI delivers measurable productivity gains in code generation, content creation, and drug discovery. If you believe AGI is achievable, massive upfront investment makes sense.
Strategic positioning could create durable monopolies. As Alasdair Nairn argues in his book “Engines That Move Markets,” the only certain path to sustained profits in new technology is monopoly protection. OpenAI is racing to build that monopoly while simultaneously depending on hundreds of billions in external capital to stay competitive. This is a precarious position for the linchpin of a circular financial system.
But even accepting these arguments doesn’t eliminate concentrated risk. The technology can succeed—the railroads can be transformative—while individual railroad companies still fail spectacularly. In the best-case scenario, we might see a duopoly: imagine Google acquires Anthropic to compete with Microsoft-backed OpenAI. Does a two-player market sustain current valuation levels across the entire supply chain?
The counter-arguments are substantively strong. They’re also irrelevant to the specific companies in the circular loop. This is the key insight: you can be bullish on AI and bearish on the financial architecture funding it. They’re separable propositions.
Several triggers could expose the architecture’s fragility:
Concentration contagion: If the same firms are simultaneously customers, suppliers, and investors, then weakness propagates instantly. If OpenAI misses Oracle payment obligations, Oracle’s revenues decline, Nvidia receives fewer orders, AMD loses its anchor customer. Cross-shareholdings obscure true demand until default forces recognition. This extends beyond the obvious names; the ramifications ripple through the S&P 500. (A toy version of this propagation is sketched at the end of this section.)
Geopolitical fracture: The modern tech ecosystem is built on a global supply chain of physical goods that depends heavily on manufacturing and critical raw materials from China. Those global dependencies pose an ongoing risk amid the current political and trade disputes.
Technological substitution: Breakthroughs in alternative compute architectures could make GPU-centric infrastructure obsolete. For instance, as noted above, if Mixture of Experts efficiency gains, or another breakthrough such as the hypergrid model proposed by energy startup Fermi, actually scale, distributed compute could gain the momentum to challenge datacenter REITs dependent on centralized grid power. This is hypothetical for the moment, but companies locked into multi-year hardware commitments would face stranded assets.
Governance and regulatory intervention: OpenAI’s November 2023 board crisis demonstrated governance fragility. Regulatory scrutiny of Nvidia’s market dominance increases. Inadequate oversight could trigger capital flight reminiscent of crypto boom-bust cycles.
The AI economy exhibits extreme volatility—sentiment shifts produce 30%+ single-day moves. In this environment, any catalyst becomes corrosive quickly.
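As flagged under concentration contagion above, here is a toy propagation over the same deal graph. The topology follows the relationships described earlier; the 50% pass-through factor and the unit-sized initial shock are assumptions, not estimates.

```python
# A toy shock propagation over the deal graph. Topology follows the deals
# described earlier; the pass-through factor and shock size are assumed.
dependents = {  # who loses revenue if the key entity cuts spending
    "OpenAI": ["Nvidia", "AMD", "Oracle", "Microsoft"],
    "Oracle": ["Nvidia"],        # fewer Blackwell orders
    "Microsoft": ["CoreWeave"],  # less rented capacity
}
passthrough = 0.5                # assumed fraction of the hit passed along

shock = {"OpenAI": 1.0}          # unit-sized shock at the keystone
frontier = ["OpenAI"]
while frontier:
    nxt = []
    for src in frontier:
        for dst in dependents.get(src, []):
            hit = shock[src] * passthrough
            if hit > shock.get(dst, 0.0):  # keep the worst path to each node
                shock[dst] = hit
                nxt.append(dst)
    frontier = nxt

for name, hit in sorted(shock.items(), key=lambda kv: -kv[1]):
    print(f"{name}: relative shock {hit:.2f}")
```

Even in this crude version, one node’s failure touches every other name in the graph within two hops.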
Two Beliefs, One Failure Mode
Ultimately, this is a story about conflated beliefs.
The first belief is robust and likely correct: artificial intelligence represents a fundamental economic transformation comparable to railroads, electricity, or the internet. The infrastructure being built will power productivity gains for decades. The revolution is real.
And this belief doesn’t require AGI. AI can be deeply transformative through agentic systems, enterprise integrations, and inference compute that substitutes for knowledge work, generating enormous value even without artificial general intelligence. The key threshold is whether AI can reliably replace labor at a marginal cost below human wages. That doesn’t require superintelligence.
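That threshold is just arithmetic. Every number below is assumed: a multi-step agentic task, a blended inference price, an hour of knowledge work.

```python
# The wage threshold as arithmetic. All inputs are assumptions chosen
# for illustration, not measurements.
tokens_per_task = 50_000           # tokens consumed per task (assumed)
price_per_1m_tokens = 10.0         # $ blended inference price (assumed)
human_cost_per_task = 40.0         # $ for ~1 hour at $40/hr (assumed)

ai_cost = tokens_per_task / 1e6 * price_per_1m_tokens
print(f"AI: ${ai_cost:.2f}/task vs human: ${human_cost_per_task:.2f}/task "
      f"({human_cost_per_task / ai_cost:.0f}x cheaper)")
```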
The second belief is fragile and possibly incorrect: that OpenAI possesses a unique, durable moat to capture the lion’s share of this revolution’s value. That its specific technical roadmap—massive pre-training, compute-intensive scaling—will dominate. That it can build defensible agentic systems faster than the competition commoditizes foundation models.
The circular financing architecture dangerously conflates these beliefs. Every dollar flowing into Nvidia, AMD, and Oracle embeds an implicit bet not just on AI, but on OpenAI specifically. The concentration is unprecedented. The dependencies are opaque. The valuations assume success.
This creates a single point of failure. The greatest risk is not that the technology fails—it’s that the technology succeeds while the company everyone backed fails to justify the capital flowing through it. The railroads could be transformative even as specific railroad companies go bankrupt, stranding investors who confused the infrastructure with the operator.
We need the AI buildout. We need the infrastructure. But we should be extraordinarily careful about letting that infrastructure’s financing depend on a single company’s ability to thread an impossibly narrow needle: building a defensible moat while burning billions quarterly, maintaining technical leadership while competitors gain ground, and converting expensive compute into profitable services before capital markets lose patience.
Right now, the entire architecture assumes Sam Altman’s OpenAI can do all of this simultaneously. That’s not a technology bet. That’s a single point of failure masquerading as a revolution.
The difference between transformative infrastructure and catastrophic capital destruction is whether the architecture can survive its keystone’s potential failure.
Based on current dependencies, the answer is uncomfortably unclear.