Decoding Discontinuity

GenAI Foundation Models: The LLM Race Has Only Just Begun

Raphaëlle d'Ornano
Oct 01, 2024
Header image credit: Ben Wicks

The generative AI boom continues and shows no signs of slowing down. According to IDC, enterprise spending on generative AI will surge from $16 billion in 2023 to $143 billion by 2027.

So far, the most significant investments have gone to the companies building the foundation models that enable the new technology. The sums invested and the scale of usage have grown so large that, on the surface, the LLM market looks mature and dominated by just a handful of names: OpenAI, Anthropic, Meta, Google, Mistral, and Cohere. But just today, Nvidia released a family of powerful open-source models capable of competing with the leaders.

One year after writing about the market for GenAI foundation models, I wanted to revisit the topic and understand what has changed. Note that I am excluding from my analysis Nvidia's newly released NVLM 1.0 family of large multimodal language models.

My key takeaways: Winners are emerging in the foundation model race, though innovation continu…
