Navigating Neocloud Discontinuity: CoreWeave S-1 Teardown
CoreWeave has filed for its IPO. The company is a critical case study in the neocloud category: specialized infrastructure providers purpose-built for AI workloads.
D’Ornano + Co. Insights identifies and analyzes technological Discontinuities — such as the one being driven by generative AI — that fundamentally reshape market dynamics and create asymmetric return opportunities for institutional investors. This month, Insights clients get exclusive access to a 21-page report that applies our pioneering Advanced Growth Intelligence (AGI) analytical framework to the prospectus of CoreWeave, a landmark IPO in the generative AI space.
Part 1. Neoclouds: The Critical Infrastructure Layer Powering the AI Revolution
The Emergence of a New Cloud Paradigm
The artificial intelligence revolution has catalyzed a fundamental shift in cloud computing requirements, giving rise to a new category of specialized infrastructure providers: neoclouds. Unlike traditional hyperscalers (AWS, Azure, GCP) that were designed primarily for general-purpose computing workloads, neoclouds represent purpose-built environments optimized specifically for the unique demands of AI computation—particularly the intensive requirements of training and running large AI models.
This shift is not merely incremental; it represents a profound discontinuity in how computing infrastructure is architected, delivered, and consumed. To understand why neoclouds matter, we must first grasp the technical discontinuity they address and the structural advantages they potentially command in the rapidly evolving AI landscape.
The Technical Discontinuity: Understanding the Efficiency Gap
Traditional cloud infrastructure was not designed with the computational patterns of modern AI workloads in mind. While effective for conventional enterprise applications, general-purpose cloud environments often struggle with the specialized requirements of AI computation, particularly for training and inference with large models. This mismatch creates what industry participants call the "Efficiency Gap"—the substantial difference between theoretical GPU performance and actual realized throughput.
Model FLOPS utilization (MFU) provides the most telling metric for this gap. It measures the percentage of a GPU's theoretical maximum compute capacity that is actually utilized during AI workloads. Empirical evidence across hyperscaler environments indicates that utilization typically falls between 35% and 45%, meaning more than half of the raw computing power paid for remains effectively unused.
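As a rough sketch, MFU is simply achieved throughput divided by theoretical peak. The function below and its example numbers (an H100-class GPU with roughly 989 TFLOPS dense BF16 peak, sustaining an assumed 400 TFLOPS on a training job) are illustrative, not drawn from any provider's disclosures:

```python
def model_flops_utilization(achieved_flops_per_s: float,
                            peak_flops_per_s: float) -> float:
    """Fraction of theoretical peak compute actually used by the workload."""
    return achieved_flops_per_s / peak_flops_per_s

# Illustrative numbers: ~989 TFLOPS peak (BF16, dense) on an H100-class GPU,
# with the training job sustaining an assumed ~400 TFLOPS.
peak = 989e12
achieved = 400e12
mfu = model_flops_utilization(achieved, peak)
print(f"MFU: {mfu:.1%}")  # ~40%, inside the 35-45% band cited above
```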
This inefficiency stems from several factors:
Network architecture not optimized for the unique data movement patterns of AI workloads
Storage systems designed for transactional rather than high-throughput sequential access
Virtualization layers adding overhead to compute-intensive operations
Suboptimal GPU clustering and interconnect configurations
Resource scheduling systems not calibrated for the extended duration of training jobs
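A simple way to see how these factors interact: each inefficiency multiplies the others, so several individually modest losses compound into the 35-45% utilization band described above. The per-factor efficiencies below are purely illustrative assumptions, not measured values:

```python
import math

# Hypothetical per-factor efficiencies (illustrative assumptions only)
factor_efficiency = {
    "network":        0.80,  # data-movement stalls during collective ops
    "storage":        0.90,  # throughput-limited data loading
    "virtualization": 0.92,  # hypervisor overhead on compute kernels
    "interconnect":   0.75,  # suboptimal GPU clustering topology
    "scheduling":     0.85,  # job placement and preemption losses
}

# Losses compound multiplicatively across the stack.
overall = math.prod(factor_efficiency.values())
print(f"Compound utilization: {overall:.1%}")  # ~42%, inside the cited band
```

The point of the sketch is that no single factor needs to be catastrophic: five factors each running at 75-92% efficiency are enough to strand more than half of the theoretical compute.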
Neoclouds emerged specifically to address these inefficiencies, employing architectural approaches that can narrow the efficiency gap and deliver substantially more compute performance per dollar. By focusing exclusively on AI/ML workloads, these specialized providers can make fundamental design decisions that would be impractical for general-purpose cloud environments.
The Economic Proposition: Value Creation Through Specialization
The economic rationale for neoclouds rests on a simple but powerful principle: specialized infrastructure can deliver superior price performance for specific workloads compared to general-purpose alternatives. This proposition becomes particularly compelling for AI computation, where the cost structure is dominated by GPU expenses and efficiency improvements translate directly to economic advantage.
For enterprises and AI developers, neoclouds offer several distinct economic benefits:
Cost efficiency: By achieving higher MFU rates (often 60-70% versus 35-45% at hyperscalers), neoclouds can deliver the same computational outcome at a significantly lower cost.
Infrastructure flexibility: Many neoclouds offer "bare metal" GPU access, eliminating virtualization overhead and allowing for fine-grained optimization of specific AI workloads.
Specialist expertise: Teams focused exclusively on AI infrastructure challenges often develop superior operational practices and technical solutions compared to generalist cloud operations teams.
Accelerated deployment: Purpose-built environments typically enable faster setup and scaling of AI infrastructure compared to navigating the broader service catalogs of hyperscalers.
The magnitude of this economic advantage becomes increasingly significant as AI models grow in size and complexity. For large foundation model development, where training costs can reach tens of millions of dollars, even modest efficiency improvements translate to material financial impact. For inference workloads at scale, where marginal cost improvements compound across billions of API calls, the economic case becomes even more compelling.
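The price-performance claim above can be made concrete with a back-of-the-envelope comparison. The hourly rate, peak throughput, and MFU figures below are illustrative assumptions, not quotes from any provider; holding price and hardware constant isolates the effect of utilization alone:

```python
def cost_per_effective_exaflop(hourly_rate_usd: float,
                               peak_tflops: float,
                               mfu: float) -> float:
    """USD per 10^18 useful FLOPs, given rate, peak throughput, and MFU."""
    useful_flops_per_hour = peak_tflops * 1e12 * mfu * 3600
    return hourly_rate_usd / (useful_flops_per_hour / 1e18)

# Same GPU class, same assumed $4.00/hr rate; only realized MFU differs.
hyperscaler = cost_per_effective_exaflop(4.00, 989, 0.40)
neocloud    = cost_per_effective_exaflop(4.00, 989, 0.65)
print(f"hyperscaler: ${hyperscaler:.2f}/EFLOP, neocloud: ${neocloud:.2f}/EFLOP")
print(f"effective cost reduction: {1 - neocloud / hyperscaler:.0%}")
```

Under these assumptions, moving from 40% to 65% MFU cuts the cost per useful FLOP by roughly 38%, which is the arithmetic behind the "same outcome at significantly lower cost" claim.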
Market Structure and Competitive Dynamics
The neocloud market has evolved rapidly over the past three years, developing a distinct structure with several categories of participants. As noted by Dylan Patel, Chief Analyst at SemiAnalysis, in his influential market segmentation analysis: "The AI infrastructure market is bifurcating into distinct provider categories with different optimization priorities and capital structures. This fragmentation reflects the reality that AI compute is not a monolithic market, but rather a spectrum of specialized needs that no single provider architecture can optimally address."
This segmentation has created several distinct categories:
Pure-play neoclouds: Companies like CoreWeave, Lambda Labs, and Crusoe focus exclusively on AI infrastructure, typically leveraging proprietary optimizations to maximize GPU efficiency.
Specialized divisions of hyperscalers: Services like AWS Trainium/Inferentia, Google TPU, and Azure's specialized ML offerings that attempt to capture the benefits of specialization within the broader cloud ecosystem.
Regional specialists: Providers like G42 (UAE) and Cohere For AI (Canada) combine specialized AI infrastructure with local data sovereignty advantages.
Aggregators and brokers: Platforms that provide unified access to multiple underlying AI infrastructure providers, often adding workflow management and optimization layers.
While pure-play neoclouds currently claim technical and efficiency advantages, hyperscalers are investing aggressively to narrow this gap. The competitive dynamics in this market are shaped by several factors:
Capital intensity: The GPU supply constraint makes access to capital for hardware acquisition a critical competitive factor.
Technical talent: Teams with specialized expertise in high-performance computing and AI infrastructure optimization remain scarce and highly sought-after.
Economies of scale: Larger GPU deployments enable more efficient operations and better amortization of fixed costs.
Power access: Securing abundant, reliable, and cost-effective power has become a critical competitive differentiator, particularly as AI infrastructure density increases.
Neoclouds as "Picks and Shovels" in the AI Gold Rush
Within our Disruptive Technology Matrix, neoclouds occupy a strategic position as the essential "picks and shovels" of the broader AI transition. Much as equipment suppliers, rather than prospectors, often captured the most reliable returns during historical gold rushes, neocloud providers stand to benefit regardless of which specific AI models or applications ultimately prevail.
This positioning offers several investment advantages:
Exposure to secular growth: Neoclouds benefit from the fundamental expansion of AI computation requirements, a trend likely to continue regardless of which specific companies lead in model development.
Reduced winner-take-all risk: Unlike foundation model developers, where network effects may drive consolidation to a few winners, infrastructure layers can support multiple successful providers serving different market segments.
Recurring revenue characteristics: The continuous nature of AI training and inference workloads creates revenue patterns with attractive predictability compared to more volatile application-layer investments.
Option value on emerging models: As new model architectures emerge, infrastructure providers can adapt to support them, effectively gaining exposure to innovation without taking direct model development risk.
However, this positioning also comes with specific risks that warrant careful consideration. The capital intensity of the business model creates vulnerability to utilization drops, while rapid technological evolution in AI hardware could potentially strand investments in current-generation equipment. Additionally, competition from well-capitalized hyperscalers represents a persistent threat to standalone providers.
Looking Forward: Sustainable Advantage or Transitional Model?
The investment thesis for neoclouds ultimately hinges on whether their current advantages represent sustainable structural differentiation or merely a temporary opportunity created by hyperscalers' lag in adapting to AI workloads.
The bull case suggests that the technical complexities of AI infrastructure optimization create persistent barriers to entry, allowing specialized providers to maintain meaningful efficiency advantages. Additionally, the economics of specialization could enable neoclouds to invest more aggressively in AI-specific innovations compared to hyperscalers balancing investments across diverse workloads.
The bear case points to hyperscalers' vast resources, existing customer relationships, and ability to subsidize AI-specific infrastructure with profits from higher-margin services. This view suggests neoclouds may eventually face margin compression as larger players close the efficiency gap and leverage their scale advantages.
The likely reality lies between these extremes. The market appears large enough to support both specialized providers and hyperscaler offerings, with different customer segments valuing different aspects of the value proposition. Specialized providers with superior technical execution, capital discipline, and business model innovation will likely carve out sustainable positions, particularly as the AI infrastructure market continues its rapid expansion.
For institutional investors, neoclouds represent a structurally advantaged segment within the broader AI value chain—one with the potential for durable growth and value creation as artificial intelligence increasingly becomes embedded in enterprise operations and consumer experiences. As we examine specific companies within this category, including CoreWeave's upcoming public offering, this framework provides essential context for evaluating both company-specific execution and broader competitive positioning in this rapidly evolving landscape.
Part 2. CoreWeave S-1 Teardown
Durable Growth Moat Analysis
CoreWeave: A Neocloud at the Crossroads
We applied our pioneering Advanced Growth Intelligence (AGI) analytical framework to the prospectus of CoreWeave, one of the hottest startups in the generative AI space. This 21-page teardown examines CoreWeave’s business model in granular detail to test its Durable Growth Moat and is available exclusively to D’Ornano + Co. Insights clients.
CoreWeave represents a critical case study in the emerging neocloud category. This comprehensive analysis examines:
How the AI infrastructure efficiency gap creates structural economic opportunities.
The hidden vulnerabilities in CoreWeave’s capital structure relative to technological refresh cycles.
Why conventional cloud valuation frameworks misread the neocloud economic model.
Strategic positioning considerations as the first pure-play GenAI infrastructure provider approaches public markets.
This robust teardown combines market-level insights on the neocloud Discontinuity with a detailed evaluation of CoreWeave’s specific advantages and vulnerabilities—essential reading for institutional investors developing exposure strategies to the AI infrastructure layer.
As the first pure-play neocloud to approach the public markets, CoreWeave provides a unique window into the operational realities, financial dynamics, and strategic challenges of this emerging category.
For qualified institutional investors seeking to capitalize on technological Discontinuities, contact us directly to discuss how D’Ornano + Co. Insights can be tailored to your organization’s investment focus and allocation scale.
Raphaëlle D’Ornano