The AI Infrastructure Boom Faces the Jevons Fallacy
Jevons paradox tells us that increased resource efficiency drives higher total consumption. A powerful rationale, but one that may be faulty when applied to physical infrastructure.
I’m Raphaëlle D’Ornano, founder of D’Ornano + Co., an international financial analysis and strategic advisory firm. We pioneered Advanced Growth Intelligence (AGI)—an analytical framework that helps investors and businesses unlock value by decoding the discontinuities reshaping industries, particularly those driven by generative AI. My work bridges the gap between cutting-edge AI research and real-world business impact, enabling higher returns through actionable insights. Want to talk about GenAI and Discontinuity? Just reach out.
The artificial intelligence infrastructure boom may be building digital cathedrals on shifting sands.
The unprecedented expansion in AI infrastructure, with global spending on data center systems already reaching $260 billion in 2024 and projected to exceed $700 billion by 2030 per BlackRock, faces significant technological disruption risks that warrant careful consideration.
A wave of technological breakthroughs in AI efficiency offers a moment to step back and examine the fundamental premises behind the infrastructure buildout.
💡 Key Points:
✅ Current infrastructure investments may benefit from lessons learned during the 2000s telecom expansion
✅ The Jevons paradox may have different implications for physical AI infrastructure versus software efficiency
✅ Emerging AI architectures could reshape infrastructure requirements
✅ Investment strategies should balance scale with technological adaptability
The Infrastructure Discontinuity
Let's start with a historical perspective.
While the late 1990s are remembered for the dot-com boom, the amount raised and lost by internet IPOs was tiny compared with the trillions of dollars that poured into the telecom sector. Established giants like AT&T and WorldCom took on billions in debt, while upstarts like Exodus Communications, which built 44 data centers in just a few years, raised almost $10 billion.
This was all justified by a simple rationale: internet traffic was doubling every 100 days. The claim originated with WorldCom and was widely echoed, even though traffic was in fact doubling closer to once a year.
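The gap between the claimed and actual doubling periods is larger than it sounds, and easy to quantify. A minimal sketch (the two doubling periods are the figures discussed above; the function name is illustrative):

```python
# Annual growth factor implied by a doubling period, in days.
# "Every 100 days" vs. "every year": the two claims discussed above.

def annual_multiple(doubling_period_days: float) -> float:
    """Growth factor over one year given a doubling period in days."""
    return 2 ** (365 / doubling_period_days)

print(f"Claimed (100-day doubling): ~{annual_multiple(100):.1f}x per year")
print(f"Actual (annual doubling): {annual_multiple(365):.0f}x per year")
```

A 100-day doubling period implies roughly 12.6x annual growth versus the roughly 2x the traffic data actually showed; the claim justified nearly an order of magnitude more capacity each year than demand could absorb.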
When the dot-com bubble burst, the telecom sector was hit hard. Of the $7 trillion decline in stock market valuations between 2000 and 2002, about $2 trillion was attributed to telecom companies, and 23 telecom firms, including Exodus, went bankrupt. The FCC found that by 2007, 73.4 million km of fiber-optic cable had been laid but 48 million km of it sat unused, a stranded asset dubbed "dark fiber."
Two decades later, it's worth examining whether current infrastructure growth assumptions merit similar scrutiny.
Today's dominant AI models demand vast arrays of power-hungry GPUs. The $500 billion Stargate project exemplifies this approach.
Then came news of DeepSeek's innovation in AI architecture, which suggests a different and potentially more efficient approach to infrastructure. The Chinese start-up's innovation — which achieves comparable performance to current models while consuming just 30 percent of traditional computational resources — prompted significant market attention that affected valuations across the sector.
AI infrastructure investors responded with a compelling framework: the Jevons paradox, the principle that increased resource efficiency drives higher total consumption.
As Microsoft CEO Satya Nadella wrote: "Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of." Last week, Morgan Stanley released a report that reinforced this view, projecting that a 90% drop in computing costs will unleash unprecedented AI adoption.
This framework has merit: the less something costs, the more people will use it. However, when it comes to AI infrastructure, this principle warrants careful examination.
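Whether a Jevons-style rebound actually occurs depends on how elastic demand is. A toy constant-elasticity model (my illustrative simplification; the function and parameter values are not from the article or from Morgan Stanley's report) makes the condition explicit: total resource use rises after an efficiency gain only when demand elasticity exceeds 1.

```python
# Toy constant-elasticity model of the Jevons effect (an illustrative
# assumption, not a model from the article). Demand scales as
# cost**(-elasticity); each unit of output needs less of the resource
# after an efficiency gain, so the two effects pull in opposite directions.

def total_resource_use(efficiency_gain: float, elasticity: float) -> float:
    """Resource consumption relative to a pre-efficiency baseline of 1.0.

    efficiency_gain: factor by which resource-per-unit-of-output falls
                     (10.0 models a 90% cost drop).
    elasticity:      price elasticity of demand for the output.
    """
    demand = efficiency_gain ** elasticity      # cheaper output -> more demand
    resource_per_unit = 1.0 / efficiency_gain   # each unit needs less resource
    return demand * resource_per_unit

# The 90% compute-cost drop projected in the Morgan Stanley report:
print(f"{total_resource_use(10.0, 1.5):.2f}")  # elastic demand: 3.16x baseline
print(f"{total_resource_use(10.0, 0.5):.2f}")  # inelastic demand: 0.32x baseline
```

With elasticity above 1 the paradox holds and total consumption rises; below 1, efficiency gains shrink total resource use even as adoption grows, which is precisely the scenario that would strand infrastructure.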
The Jevons Consideration
The DeepSeek breakthrough represents more than an incremental improvement in efficiency. It demonstrates an alternative approach to building AI systems, with implications for the physical infrastructure supporting them.
DeepSeek has introduced new possibilities in the relationship between compute resources and model performance. It suggests that future advances may come from architectural innovation as much as raw computing power.
If proven successful, the efficiencies demonstrated in DeepSeek's open-source model could influence how larger companies approach model training, potentially affecting data center demand for training infrastructure. While this could accelerate inference compute adoption as token prices decrease, the infrastructure requirements for inference differ from those of model training.
The market response has prompted infrastructure investors to consider an important question: how might next-generation AI affect physical infrastructure requirements?
The industry finds itself at an inflection point reminiscent of the transition from mainframe computing to distributed systems. Just as that shift transformed computer center requirements, today's investments in GPU-optimized infrastructure may need to evolve with technological advances.
While efficiency gains in computing naturally translated to increased usage, the same may not apply uniformly to the physical infrastructure layer. Power systems, cooling architectures, and real estate configured for today's GPU farms may require adaptation for emerging computing paradigms.
The Jevons paradox likely applies to the software layer, but physical infrastructure faces different considerations. While future demand for AI infrastructure seems certain, the timing and technical requirements may evolve. As NVIDIA's Jensen Huang noted, "Our systems are progressing way faster than Moore's Law."
The Infrastructure Evolution Risk
The AI data center market encompasses multiple segments, each with distinct considerations.
Hyperscale facilities exceeding 100MW represent substantial investments in current AI architectures, with extensive power and cooling systems optimized for dense GPU clusters. Meanwhile, specialized 10-20MW facilities from AI infrastructure companies dubbed "neoclouds" target specific AI workloads with optimized designs. Traditional data center REITs have moved into AI workloads through retrofits and expansions, and new entrants like Stargate propose large-scale AI-specific infrastructure.
These approaches share a common assumption: that AI computing will continue to require substantial arrays of high-powered GPUs, each consuming up to 700W. These facilities, requiring 2-3 years to construct and expected to operate for decades, may need to adapt to evolving architectural requirements.
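Rough power arithmetic shows how tightly these facilities are coupled to today's GPU profile. A back-of-envelope sketch (the 700W per-GPU figure is from the text; the PUE and per-server overhead values are my illustrative assumptions):

```python
# Approximate GPU count a facility's power budget supports. The 700 W
# per-GPU figure comes from the text; the PUE and server-overhead values
# below are illustrative assumptions, not data from the article.

def gpu_capacity(facility_mw: float, gpu_watts: float,
                 pue: float = 1.3, server_overhead: float = 1.4) -> int:
    """Estimate how many GPUs fit within a facility's power budget.

    pue:             power usage effectiveness (cooling, distribution losses)
    server_overhead: multiplier for CPU, memory, and networking per GPU
    """
    it_power_watts = facility_mw * 1_000_000 / pue   # power left for IT load
    watts_per_gpu = gpu_watts * server_overhead      # all-in draw per GPU
    return int(it_power_watts / watts_per_gpu)

print(gpu_capacity(100, 700))  # a 100 MW hyperscale site: roughly 78,000 GPUs
```

If per-GPU draw or cluster topology shifts materially, toward lower-power accelerators or sparser architectures like DeepSeek's, the same 100MW shell supports a very different, and possibly much smaller, load profile than it was engineered for.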
The implications vary by scale. Hyperscale centers may offer greater flexibility for adaptation, though their specialized cooling and power infrastructure require careful consideration. Smaller AI-focused facilities, despite lower capital requirements, may need to evolve beyond pure infrastructure plays.
The potential for infrastructure evolution extends beyond capacity planning. Today's data centers must consider cooling systems, power purchase agreements, and real estate footprints in light of evolving computing needs. This represents not just a real estate investment consideration but a technological infrastructure opportunity that could develop rapidly if approaches like DeepSeek's prove viable.
The trend extends beyond DeepSeek. Last week, Mistral AI released its open-source-based "Le Chat" model, reportedly trained at 20 percent of the cost of ChatGPT. DeepSeek's models have become among the top downloads on Hugging Face, whose community is working to make more of DeepSeek's components and data sets open source.
These developments could influence the broader data center industry, not just AI-specialized facilities. While AI-specialized data centers face immediate considerations, the entire industry benefits from reassessing assumptions about future power requirements, facility design, and location strategy.
The industry may need to address external factors as well. A new Goldman Sachs report notes that the infrastructure boom might be influenced by factors such as power generation capacity or transmission system requirements. Late last year, the US Federal Energy Regulatory Commission blocked a proposal for Amazon to use nuclear power for a Pennsylvania data center. Many local communities have begun to examine data center proposals' resource implications.
Encouragingly, many hyperscalers are pursuing greater efficiencies. Amazon recently announced innovations in power, cooling, and hardware design for more energy-efficient data centers. Meta is highlighting water efficiency. Microsoft unveiled a zero-water evaporation design. Google Cloud has embraced more modular data center designs.
Some of these moves suggest preparation for evolution. For now, though, the Jevons paradox framework has supported continued infrastructure investment. Meta announced it is increasing capital expenditures to $65 billion in 2025, from $35 billion in 2024. As The New York Times wrote: "An apparent breakthrough in efficiency from the Chinese start-up did not make tech's biggest companies question their extravagant spending on new data centers."
Investment Implications
The market faces an opportunity to price technology evolution into what has been viewed primarily as a real estate play. While demand for AI computing will almost certainly keep growing, the nature of that demand may evolve. Success in this market will require moving beyond traditional metrics of location quality and tenant credit strength to understand the technological adaptability of infrastructure investments.
For institutional investors, the opportunity lies not just in participating in the AI infrastructure boom, but in accurately pricing the potential for technological evolution. Those who can build flexibility into their infrastructure thesis while properly valuing adaptation potential will find significant opportunities in what remains one of the most important infrastructure buildouts of our generation.
The leaders in this space will be those who recognize that the AI infrastructure story isn't merely about scale; it's about technological adaptation in a rapidly evolving landscape. The industry would benefit from considering architectural evolution when committing capital to infrastructure designs.