The Layer: Why the "LLMs Are A Dead-End" Consensus Is Premature
As insiders warn scaling is hitting limits, breakthroughs in verified AI systems and new multi-agent coordination suggest the generative paradigm isn’t collapsing, but evolving through a new “layer.”

The consensus that LLMs are a dead end on the road to AGI is hardening fast. Former Meta Chief AI Scientist Yann LeCun just raised a $1B seed round to replace them, and even OpenAI CEO Sam Altman now concedes that scaling alone won't reach AGI. But two recent developments suggest the obituary is premature. Axiom Math is producing verified proofs of unsolved mathematical conjectures by co-designing AI with formal verification; it just raised a $200M Series A at a $1.6B valuation. And a new paper shows that the vision pathway of frozen vision-language models (VLMs) can be repurposed as a communication channel between heterogeneous agents, delivering faster and often more accurate coordination than text. Neither result was supposed to be possible within the generative paradigm. Both were achieved by engineering a "layer" above it. The architecture may have a ceiling, but we are nowhere near it yet.