Decoding Discontinuity

The Layer: Why the "LLMs Are a Dead End" Consensus Is Premature

As insiders warn that scaling is hitting its limits, breakthroughs in formally verified AI systems and in multi-agent coordination suggest the generative paradigm isn't collapsing but evolving through a new "layer."

Raphaëlle d'Ornano
Mar 17, 2026

The consensus that LLMs are a dead end on the road to AGI is hardening fast. Former Meta Chief AI Scientist Yann LeCun just raised a $1B seed round to replace them, and even OpenAI CEO Sam Altman now admits that scaling alone won't reach AGI. But two recent developments suggest the obituary is premature. Axiom Math is producing verified proofs of unsolved mathematical conjectures by co-designing AI with formal verification; it just raised a $200M Series A at a $1.6B valuation. And a new paper shows that the vision pathway of frozen vision-language models (VLMs) can be repurposed as a communication channel between heterogeneous agents, delivering faster and often more accurate coordination than text. Neither result was supposed to be possible within the generative paradigm. Both were achieved by engineering a "layer" above it. The architecture may have a ceiling. But we are nowhere near it yet.
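What a formally verified proof means in practice is easy to show in miniature: a statement whose proof the proof assistant's kernel checks mechanically, so a wrong proof simply fails to compile. The Lean snippet below is a toy illustration of that concept only; it says nothing about Axiom Math's actual system.

```lean
-- A machine-checked theorem: if this compiles, the proof is correct.
-- Toy illustration of formal verification, not Axiom Math's pipeline.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The latent-channel result is also easier to picture in code. The PyTorch sketch below uses invented module names and dimensions (nothing in it comes from the paper) to show the shape of the idea: the sender emits a continuous vector in the receiver's visual-embedding space, and the receiver's frozen vision projection ingests that vector where image features would normally go, bypassing text entirely.

```python
import torch
import torch.nn as nn

# Illustrative dimensions only (assumptions, not from the paper).
CHANNEL_DIM = 512   # width of the shared "visual" latent channel
HIDDEN_DIM = 1024   # receiver language-model hidden size

class SenderAgent(nn.Module):
    """Maps the sender's private state to a message in embedding space."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.to_channel = nn.Linear(state_dim, CHANNEL_DIM)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.to_channel(state)  # one continuous message vector

class ReceiverAgent(nn.Module):
    """Consumes the message through a frozen vision projection, not text."""
    def __init__(self):
        super().__init__()
        # Stand-in for the frozen VLM's vision-to-LM projection.
        self.vision_proj = nn.Linear(CHANNEL_DIM, HIDDEN_DIM)
        for p in self.vision_proj.parameters():
            p.requires_grad = False  # frozen: only the sender adapts
        self.head = nn.Linear(HIDDEN_DIM, 2)  # toy downstream decision

    def forward(self, message: torch.Tensor) -> torch.Tensor:
        return self.head(torch.tanh(self.vision_proj(message)))

sender, receiver = SenderAgent(state_dim=64), ReceiverAgent()
state = torch.randn(1, 64)        # sender's private observation
logits = receiver(sender(state))  # coordination without a single token
print(logits.shape)               # torch.Size([1, 2])
```

The appeal of such a channel is bandwidth: one embedding vector can carry what would otherwise take many text tokens to serialize, transmit, and re-parse.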
