Part III — Figma's Orchestration Play: How MCP Network Effects Rewrite Software Defensibility
Act III of the SaaS Reckoning: The tools that can plug into the loop won't just survive. They'll compound.
This is the third and final installment in a series examining the structural repricing of enterprise software. Part I argued the $285 billion “SaaSpocalypse” was mispriced panic when it came to systems of record that could mutate into orchestrators of agentic workflows. Part II decoded Anthropic’s $380 billion valuation through the lens of orchestration economics. This final piece examines who benefits from orchestration, and why the answer is not what the market expects.
Figma’s stock has plummeted roughly 70% from its $70B IPO peak, but it now looks undervalued as it evolves from a design tool into an essential orchestration node in agentic AI workflows. Through MCP integration and the bidirectional “Code to Canvas” feature, Figma captures semantic density and creates compounding agent-to-tool-to-agent network effects. That rewrites software defensibility from seat-based metrics to flow control, separating thriving “throughput nodes” like Figma from commoditized tools, while systems of record follow a different path: becoming systems of action anchored in their data gravity.
The SaaS panic selling has not stopped.
On Friday, Anthropic launched Claude Code Security, a research preview that scans codebases for vulnerabilities and proposes fixes. CrowdStrike fell 6.8%. Okta dropped over 9%. SailPoint shed 9%. The Global X Cybersecurity ETF closed at its lowest level since November 2023. In a matter of hours, billions more in market capitalization evaporated from a sector investors had, until that morning, considered relatively insulated from the agentic disruption.
Three weeks into the software reckoning that began with the Claude Cowork plugins, the pattern now ticks forward with clockwork precision: Anthropic ships a feature. It’s modest in scope, but enormous in implication. Entire sectors reprice overnight.
But as I have noted in recent weeks, these moments of extreme selling are merely emblematic of the broader souring that has been quietly eroding confidence in the SaaS sector for months now.
Perhaps no SaaS stock has been battered as badly over the past six months as Figma, the one-time design darling whose IPO last July carried high hopes for a revival of public offerings. There was a mythical quality to Figma’s tale: its $20B acquisition by Adobe was scuttled by regulators, yet the company re-emerged from the wreckage looking even stronger, armed with a generative AI portfolio, and watched its stock soar to $142 per share and a $70B valuation during the first few weeks of trading.
I was a non-believer.
Just as Figma was about to go public, I published a lengthy teardown of its prospectus for clients using our proprietary Durable Growth Moat methodology, concluding:
“Figma is a beautifully engineered SaaS 2.0 company being valued for a future it has not yet earned.”
The soaring stock ran counter to my conclusion. So I stress-tested my own analysis just 10 days later.
Fast forward, and the mood has changed. With a vengeance. Figma’s stock is down more than 70% as of February 23rd.
Its market cap hovers around $12B, possibly making it a tempting takeover target. All of this amid the larger SaaS bloodletting.
Which can only mean one thing: It’s time for me to explain why Figma may now be dramatically undervalued.
This is not contrarianism for its own sake. Rather, as with my initial assessment, new developments warrant re-evaluation. In this case, I’ll point back to my own words above: “not yet earned.”
My original skepticism toward Figma was rooted in valuation discipline and architectural ambiguity: a great SaaS company priced as a future orchestrator without proof it could occupy that role.
What has changed is not the violence of the selloff, but the company’s positioning inside the emerging agent workflow.
More specifically, its recent launch of “Code to Canvas” and the underlying MCP integration signal something materially different from incremental AI feature releases. These moves represent a shift from being a destination tool that designers open to becoming a coordination surface that agents must traverse. The outlook on Figma is changing not because sentiment has, but because its architectural posture has.
The deeper lesson extends beyond a single stock. In the agentic era, tool defensibility migrates from features to flow control, from owning a category to owning a node in the orchestration graph. The companies that compound will not be those with the highest retention under yesterday’s seat-based metrics, but those whose context is semantically dense enough that agents lose fidelity if they route around it. MCP makes this visible. Open protocols flatten distribution advantages while amplifying architectural ones.
This article argues that Figma’s orchestration positioning is an early proof point of that broader transition. As I outlined in Parts I and II of this mini-series, the SaaSpocalypse should not be a blanket indictment of software. It should be a sorting mechanism. I applied that analysis to systems of record. I am now applying it to tools.
What follows is an examination of how MCP-based interconnection rewrites software defensibility, why the market’s sector-based panic is missing the architectural shift underway, and how to distinguish between tools agents can call around and nodes they must call through.
The SaaSpocalypse Revisited
The S&P 500 Software & Services Index is down roughly 20% year to date. Hedge funds have shorted more than $24 billion in software stocks. Traders at Jefferies coined a term for it.
The “SaaSpocalypse” has entered the financial lexicon.
Yet for all this valuation violence, market commentary remains stuck in the gravitational pull of a collapsing paradigm from which it cannot seem to escape.
The dominant narrative runs something like this:
AI labs are ascending, software companies are declining, and the only question is how quickly their trajectories inevitably cross along the valuation graph. More nuanced versions invoke last quarter’s growth rates, point to intact margins, and conclude that “fundamentals are fine.”
This is the language of the SaaS 2.0 playbook: revenue persistence, net dollar retention, Rule of 40. These metrics are being desperately applied to a world in which the very concept of what constitutes durable software value is being rewritten.
Gaston Bachelard would have recognized the pattern immediately. The French philosopher of science observed that the greatest obstacle to new knowledge is not ignorance but existing knowledge, mental models inherited from the previous paradigm that feel correct precisely because they once were. When analysts point to strong retention metrics and healthy margins as evidence that disruption is overblown, they are not wrong about the numbers. They are wrong about what the numbers measure.
Retention in a world of human-operated interfaces tells you nothing about defensibility in a world of agent-orchestrated workflows. Margins built on seat-based pricing are irrelevant when the unit of value shifts from per-user subscriptions to per-outcome orchestration fees. The backwards-looking data is accurate. The forward-looking inference is the obstacle.
Here is the irony.
Many of the software security stocks that plunged last Friday - CrowdStrike, Okta, SailPoint, Palo Alto - were prominent constituents of the very “safe haven” baskets that leading banks had assembled for clients seeking resilient tech exposure. These were sector-based selections, not architectural ones. They reflected the old grammar of portfolio construction: pick the best companies within defensible verticals, weight toward category leaders, and trust the moat. Not a single one of those baskets, to my knowledge, was stress-tested against the question of where a company sits in the emerging orchestration graph, whether its context compounds or commoditizes, and whether its workflow position makes it a node that agents must call through or a tool that agents can call around.
The result was predictable, at least for those using the right lens. The “safe” stocks were precisely the ones most exposed, because sector preference had been mistaken for architectural resilience.
Against this backdrop of misplaced panic and misunderstood defensibility, something genuinely significant happened last Monday.
Code to Canvas: Figma Steps Into the Loop
On February 17, Figma and Anthropic launched a feature called “Code to Canvas.” The announcement was modest. The implications are not.
Code to Canvas allows developers working in Claude Code to take a functioning, rendered user interface that runs on localhost, staging, or production and push it directly into Figma’s design canvas as fully editable layers. Not a screenshot. Not a flattened export. Actual Figma objects: movable, annotatable, duplicable, and refinable. Type “Send this to Figma” in the terminal, and the bridge opens.
This completes a circuit that Figma has been wiring in plain sight.
In June 2025, the company launched its Dev Mode MCP Server in beta, allowing AI coding agents to pull design context such as components, variables, styles, and design tokens directly from Figma files into their code generation workflows. That was canvas-to-code: structured semantic context flowing outward to agents. Code to Canvas closes the loop.
Now it runs both ways.
Code generated by an AI agent flows back into Figma, where designers, product managers, and engineers can lay out alternatives side by side, annotate what works, and refine direction. It does all of this before sending the revised designs back into code through the same MCP connection.
Figma CEO Dylan Field framed the logic like this: “Code is powerful for converging - running a build, clicking a path, and arriving at one state at a time. The canvas is powerful for diverging - laying out the full experience, seeing the branches, and shaping direction collectively.”
What matters most is the protocol underneath
The integration runs on Figma’s MCP Server. This is the same Model Context Protocol (MCP) that Anthropic pioneered internally, open-sourced in November 2024, and donated to the Linux Foundation’s Agentic AI Foundation in December 2025. MCP now counts over 97 million monthly SDK downloads and more than 10,000 active servers. It is integrated into Claude, ChatGPT, Gemini, Microsoft Copilot, Cursor, and Visual Studio Code. By any honest measure, it has become the industry standard for connecting AI systems to external data, tools, and applications. It is the “USB-C of AI,” as more than one observer has put it.
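Under the hood, MCP frames every interaction as JSON-RPC 2.0: an agent discovers a server’s tools via `tools/list` and invokes one via `tools/call`. The minimal sketch below shows the shape of such a request. The tool name `get_design_context` and its arguments are hypothetical, chosen for illustration, not drawn from Figma’s actual MCP surface.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 "tools/call" request, the message shape
    MCP uses when an agent invokes a server-exposed tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# A hypothetical request an agent might send to a design-context server:
request = make_tool_call(1, "get_design_context", {"node_id": "42:17"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # get_design_context
```

The point of the sketch is the neutrality of the envelope: nothing in the message names a particular model provider, which is why any MCP-compatible orchestrator can route through the same server.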
By building on MCP rather than a proprietary API, Figma has done something most incumbents under pressure have not. It has positioned itself as a node in the emerging orchestration graph. Not a siloed tool at risk of bypass, but a surface through which agents must pass to accomplish design-to-code and code-to-design workflows. The distinction matters. Tools can be replaced. Nodes in a coordination graph become harder to remove the more agents route through them.

The quarterly data, reported the same day, suggests this is already happening.
Weekly active users of Figma Make grew over 70% quarter-over-quarter. More than half of enterprise customers with ARR above $100,000 were building in Make weekly. And the most revealing number: nearly 60% of all Figma Make files created in 2025 came from non-designers. The platform is expanding beyond its original user base because the agentic workflow demands a collaborative surface for evaluating AI-generated outputs, not just for designers, but for anyone shaping product decisions.
GitHub’s endorsement during the earnings call underlined the point.
The company uses Figma’s MCP Server and Code Connect to surface production design-system code directly in Figma, managing over 7,400 design tokens and tens of thousands of lines of code in a workflow that depends on tight coordination between design and engineering. When the world’s largest code hosting platform calls your MCP integration “a game changer,” the signal is directional.
From Overpriced Promise to Architectural Conviction
Understanding why Figma’s move matters, and what it reveals about where software value is migrating, requires stepping outside the SaaS 2.0 analytical frame entirely. It also requires recalling how this company was priced when it came to market.
In July 2025, when Figma filed its S-1, I published a teardown through our Durable Growth Moat methodology. The headline finding was blunt: Figma was “a SaaS 2.0 artifact priced as if it already controls the AI future.”
On the financial metrics that the SaaS 2.0 playbook rewards, the company was unquestionably best-in-class: 46% year-over-year growth, 132% net dollar retention, 91% gross margins, $1.5 billion in cash with zero debt, and $555,000 in revenue per employee. An elite business by any traditional measure.
Beneath the headline numbers, however, our analysis identified structural concerns that warranted price discipline. The 132% net dollar retention and 96% gross retention were calculated on just the top 2.5% of Figma’s paying customer base - those with more than $10,000 in annual contract value. This methodology, common among product-led growth companies, omitted the vast majority of Figma’s 450,000 customers, likely masking higher churn in the long tail. The 91% gross margin likely excluded the high cost of serving millions of free users, understating the true cost structure by several percentage points. These were curated numbers - not dishonest, but carefully framed to make the growth narrative more compelling than a full-base view would support.
More critically, we observed a striking disconnect between AI narrative and AI financial reality.
Despite over 150 mentions of AI in the S-1 and 40% of incremental R&D directed toward AI capabilities, the filing demonstrated no quantifiable revenue or cost savings from AI initiatives. Figma was a laggard relative to peers like Adobe, which had already begun monetizing AI. And what concerned us most: the S-1 contained no strategic discussion of Figma’s role within an agentic world where autonomous agents could abstract away the collaborative workflows that constituted the company’s primary competitive moat.
However, we saw early signals of orchestration ambition. That included the Dev Mode MCP integration in beta, Figma Make’s goal-based UX, and the intent abstraction surfaces of Slides and Sites. But we concluded that Figma “currently lacks the core infrastructure of a true orchestrator, such as dedicated agent SDKs, persistent memory systems for agents, and dynamic task-routing capabilities.”
The potential was real. The proof was absent.
Our valuation discipline followed directly. Financial strength warranted a premium multiple. Unproven architectural positioning demanded a discount. We recommended $15 billion to $17 billion, a valuation anchored to elite SaaS peers like ServiceNow and Datadog, trading at 12 to 15 times forward revenue, and calibrated by the material threat of agentic AI commoditizing Figma’s core workflow. Our bull case extended to 18 times forward revenue, a multiple that, we wrote explicitly, “requires evidence of successful agentic orchestration.”
As I noted above, the market initially disregarded that discipline entirely. Figma sought roughly $20 billion at its IPO. It surged 250% on its first day of trading, settling near $40 billion. That was approximately 35 times forward revenue, pricing in an orchestration future the company had not yet earned.
Where architecture meets value
Our concern at IPO was never that Figma lacked potential. It was that the market was paying orchestrator multiples for a company that, however exceptional, remained a tool. The gap between promise and proof was being priced at zero. Seven months later, the calculus has shifted in two directions simultaneously.
On the negative side, the concerns we flagged about selective metrics and unproven AI monetization remain relevant. The agentic disruption risk we identified is now violently manifest across the entire software sector.
On the positive side, Code to Canvas represents precisely the kind of evidence our bull case demanded. This is what should interest investors capable of looking beyond the panic.
The bidirectional MCP bridge demonstrates something our framework specifically measures: the ability to move from tool to node, from passive repository in agent workflows to active participant in the orchestration loop. When a developer types “Send this to Figma” inside Claude Code, Figma ceases to be merely a repository for design artifacts. It becomes the surface where teams explore divergent possibilities, decide which version of an AI-generated interface to pursue, and shape product direction. That is the beginning of intent capture. The transformation is not complete, and the position is not yet commanding, but it is structurally different from where the company stood at IPO.
The quarterly data reinforces the shift. The Make adoption numbers show the platform expanding beyond its original user base, with the collaborative surface for evaluating AI-generated outputs becoming integral to how product decisions are made.
What was overpriced at $40 billion for a promise may well be underpriced at roughly $12 billion for an emerging architectural conviction.
The selective metrics we flagged at IPO still counsel caution. But the orchestration evidence that was absent from the S-1 is now appearing in the product, in the quarterly numbers, and - most importantly - in the architectural decisions the company is making about where to position itself in the agent workflow.
The New Network Effects: MCP as Architectural Moat
The deeper insight here is not about Figma specifically. It is about what MCP-based interconnection creates: a new species of network effect that the market has not yet learned to price.
In the SaaS 2.0 paradigm, network effects accrue through user density. More designers on Figma meant more shared files, more components, and more reasons for the next designer to join. These effects were real but bounded. They scaled within the design community and slowed as penetration matured.
MCP creates a fundamentally different dynamic. Every tool that exposes its context through the protocol becomes more valuable to every agent that can consume that context. Every agent that routes through MCP-connected tools creates a reason for more tools to join the graph. The network effect is not user-to-user. It is agent-to-tool-to-agent. Figma’s design context becomes more valuable because Claude Code can consume it. Claude Code’s outputs become more valuable because Figma can receive them. GitHub’s endorsement of the loop makes both ends stickier.
Each new connection does not merely add a node. It increases the routing density of the entire graph.
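The compounding claim can be made concrete with a toy model - my own illustration, not a formula from this analysis. Treat each MCP connection as an edge between an agent and a tool, and measure “routing density” as the number of distinct agent → tool → agent paths the graph supports. Adding a single node raises the count across the whole graph, which is the multiplicative, rather than additive, behavior described above.

```python
def routing_density(agents: int, tools: int) -> int:
    """Count ordered agent -> tool -> agent paths with two distinct agents,
    assuming every agent is connected to every tool (a simplification)."""
    return tools * agents * (agents - 1)

# One extra agent or tool lifts the path count graph-wide:
print(routing_density(agents=3, tools=2))  # 2 * 3 * 2 = 12
print(routing_density(agents=4, tools=2))  # 2 * 4 * 3 = 24
print(routing_density(agents=4, tools=3))  # 3 * 4 * 3 = 36
```

The fully-connected assumption overstates real graphs, but the qualitative point survives: a node’s value scales with the traffic of the graph it joins, not merely with its own user count.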
This is the structural logic that will separate companies that thrive in the orchestration era from those that do not. But it is important to be precise about which tools possess the characteristics that make this logic work, and which ones flatter themselves that they do.
Figma’s MCP play succeeds because of a quality that not every horizontal tool shares: ubiquity across the creative production chain.
Design touches everything: product interfaces, marketing assets, presentations, websites, and developer handoffs. Figma’s canvas is a natural waypoint because the artifacts it produces are inputs to nearly every downstream workflow in product development, and those workflows are now increasingly agent-mediated.
When 76% of Figma’s customers already use at least two of its products, and nearly 60% of Figma Make files are created by non-designers, the company has crossed from a design tool to a cross-functional collaboration surface. That breadth is what creates routing density. Enough surfaces are touching enough workflows that agents in the orchestration graph encounter Figma as a natural node rather than an optional detour.
Not every horizontal SaaS company has this.
The distinction is between throughput nodes, where work must pass to maintain fidelity, and convenience layers, where the routing function can be absorbed. Figma is becoming the former because its visual design context (including components, tokens, layout intent, and interaction patterns) is semantically rich in a way that agents cannot replicate without passing through the artifact.
A task list is semantically thin. A design system is semantically dense. Density is what creates stickiness.
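The thin-versus-dense contrast can be sketched with two hypothetical context structures - neither is Figma’s actual schema. A crude proxy for semantic density is simply how much structured, intent-bearing information an agent would lose by routing around the tool:

```python
def leaf_count(context) -> int:
    """Recursively count leaf values in a nested structure -
    a crude proxy for how much context an agent would forfeit."""
    if isinstance(context, dict):
        return sum(leaf_count(v) for v in context.values())
    if isinstance(context, list):
        return sum(leaf_count(v) for v in context)
    return 1

task_item = {"title": "Fix login bug", "status": "open"}  # thin
design_component = {                                      # dense
    "name": "PrimaryButton",
    "tokens": {"color": "brand/600", "radius": "md", "spacing": [8, 16]},
    "states": ["default", "hover", "disabled"],
    "layout": {"direction": "row", "align": "center"},
}

print(leaf_count(task_item))         # 2
print(leaf_count(design_component))  # 10
```

A leaf count is obviously not a real measure of semantics, but it captures the asymmetry: an agent can regenerate a task item from scratch; it cannot regenerate tokens, states, and layout intent without passing through the artifact that encodes them.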
This framework also cautions against dismissing vertical specialists categorically. In regulated sectors such as finance, healthcare, and legal services, proprietary data and compliance requirements create a different kind of gravity, one that agents cannot route around without losing regulatory fidelity. A Bloomberg terminal or an Epic health records system is not defensible merely because of data volume. It is defensible because the data includes institutional context such as regulatory interpretations, clinical decision histories, and compliance audit trails. These are the assets that agents require to operate accurately within those domains. If those vertical systems evolve from static records into active orchestration surfaces that direct agents, they will possess a form of defensibility that complements, rather than competes with, the horizontal MCP logic.
The point is not that horizontal tools are universally superior to vertical specialists. It is that the market’s current framework entirely misses the architectural dimension. At the moment, the market focuses on sorting companies by sector, then applying a blanket discount or premium. But what really matters is whether a company’s data and workflow position creates semantic density that agents must route through, whether that density compounds as agent usage increases, and whether the company is building the bidirectional bridges that keep it in the loop.
Some horizontals qualify. Some verticals qualify. The sector label is noise.
There is a necessary caveat here. The same openness that prevents Figma from being locked into a single model provider also lowers the barrier for competitors to become MCP nodes themselves. If every design tool builds an MCP server, the architectural moat narrows, and the contest reverts to a features war on more familiar ground.
The counter-argument is that the node value in an orchestration graph is not binary. It compounds with routing density, with the depth of semantic context exposed, and with the breadth of workflows that pass through the surface. A rival that ships an MCP server tomorrow does not instantly replicate the design system context, the cross-functional adoption patterns, or the agent workflow integrations that Figma has spent months building. First-mover advantage in protocol adoption is real, even if it is not permanent. The moat is wider than “we have MCP,” but thinner than “we are irreplaceable.” Investors should price accordingly.
The Light and the Distinction
This is the light. But precision matters in describing what it illuminates.
In Parts I and II of this series, I argued that systems of record such as CRMs, ERPs, and core financial platforms face a different structural challenge. Their defensibility derives from being the authoritative source of truth for enterprise operations, and their path to the orchestration era runs through becoming “systems of action” that direct agents rather than merely storing data for agents to query. That transition is real, but fundamentally different from what Figma and the broader tool layer are demonstrating.
Systems of record have gravity. Enterprises have built decades of processes around them. Ripping them out is prohibitively expensive. Their challenge is evolution: ascending from record to action, from storage to coordination, without forfeiting the gravitational advantage of their installed base.
Tools - horizontal or vertical - have fluidity. They touch workflows but anchor fewer processes. Their challenge is connection: integrating deeply enough into the orchestration loop to become indispensable nodes before agents learn to route around them.
Figma is demonstrating that tools can overcome the challenge of connection - provided they possess the semantic density and cross-functional reach to become indispensable throughput nodes. Design context is rich, structured, and necessary for downstream fidelity. That is what makes the bidirectional MCP bridge work: the canvas where AI outputs become human decisions, the context surface that ensures agent-generated code aligns with design intent, and the collaboration layer where product direction emerges from collective intelligence rather than individual prompting.
This argument ultimately rests on an assumption worth stating plainly: that humans will continue to want a visual surface on which to evaluate, compare, and direct AI-generated outputs.
If agents eventually become capable of handling divergence and convergence entirely autonomously by generating, evaluating, and selecting among design alternatives without human review, then the canvas becomes vestigial.
That is not today’s reality.
Current agent capabilities excel at generating plausible outputs but remain poor at judging which output best serves a product’s strategic intent, brand coherence, or user experience goals. For now, the collaborative evaluation surface is where human judgment enters the agentic loop, and that surface is growing more integral, not less, as agent output volume increases. But the long arc of AI capability improvement means this assumption should be revisited rather than treated as permanent.
MCP is the mechanism that makes this durable. By building on an open standard, Figma avoids the trap of dependence on a proprietary API that a single model provider could deprecate or restrict. The open standard means the integration works regardless of which model sits at the orchestration layer. Today, that is Claude Code. Tomorrow it could be Codex, Gemini, or any MCP-compatible agent. The node persists even as the orchestrator changes. If MCP were to fragment, or a proprietary protocol were to prevail (unlikely, given the breadth of the foundation’s membership and the Linux Foundation’s governance track record), this thesis would weaken. But the trajectory favors openness, and the companies following it are standing on solid architectural ground.
This is a new form of defensibility that the market’s current analytical toolkit of retention rates, margin profiles, growth trajectories, and sector baskets simply cannot detect. Identifying it requires a framework that measures architectural positioning alongside financial performance: where a company sits in the emerging orchestration graph, whether its context compounds or commoditizes as agent density grows, and whether its workflow position makes it a node that agents must call through or a tool they can call around.
Conclusion: The Sorting Has Begun
The SaaSpocalypse is real, but its casualties are not who the market thinks they are.
Workflow wrappers and thin AI applications, the venture-funded wave of “model + interface + subscription” companies built between 2023 and 2025, face existential compression as model providers absorb their value propositions through plugins and features. That thesis has been repeatedly confirmed over the past three weeks.
But tools that integrate successfully into the orchestration loop by adopting MCP, building bidirectional bridges, and possessing the semantic density that makes them throughput nodes rather than convenience layers are constructing a form of defensibility that did not exist in the SaaS 2.0 era. They are not surviving despite the agentic transition. They are potentially thriving because of it, accessing network effects that scale with agent density rather than user density, compounding as the orchestration graph grows.
Figma’s Code to Canvas is the first clean proof point. Not because Figma has completed its transformation - it has not - but because it demonstrates the architectural pattern that will separate winners from victims. Plug into the loop. Become a surface where agents generate value, and humans make decisions. Make your context available via open standards so that every orchestrator, regardless of provider, can route through you. And ensure that the context you provide is semantically rich enough that routing around it would mean losing fidelity.
The companies that grasp this - horizontal or vertical, incumbent or insurgent - and that move with urgency, will find that the market’s indiscriminate selling has created a mispricing of historic proportions. The companies that do not, that continue to build features rather than connections, that treat AI as a product enhancement rather than an architectural imperative, will discover that last quarter’s margins offered no protection at all.
The SaaS 2.0 playbook told you to build the best tool and defend your vertical. The orchestration era tells you to become the node that agents cannot route around.
Figma just showed how.
This article is the third and final installment in the SaaS Reckoning series.
Part I: The $285 Billion SaaSpocalypse Is the Wrong Panic
Part II: Decoding Anthropic’s $380 Billion Valuation: Orchestration over Raw Intelligence
Disclaimer: This post reflects the author’s opinions and analysis based on publicly available information. It is not investment advice or a recommendation to buy, sell, or hold securities. Readers should conduct independent research and consult financial advisors before making investment decisions.