Most AI Startups Are Built to Die
As AI makes software more malleable, value is shifting to the orchestration layer. This post explores why open-source products are best suited for this paradigm, and how they need to be designed to build a competitive edge.
01 | “Walled gardens” are a relic.
Something’s shifting in how people talk about AI. Early excitement seems to be giving way to quiet skepticism – especially among users implementing these tools in their day-to-day workflows, and developers who understand what’s going on under the hood.
This skepticism stems from a fundamental problem: today, most AI tools are only creating the illusion of productivity.
AI generates content quickly, completing tasks in seconds. But the quality is inconsistent. Without strong fine-tuning, robust memory architecture, and a clear scope around what the system should and should not own, the output is often mediocre. AI tools introduce subtle errors – inaccurate references, hallucinations, polished UIs hiding fragile back-ends – that humans have to fix. This leaves users underwhelmed and less efficient.
That gap – between how effective AI feels at first and how it actually performs over time – is a material risk, and one I don’t see many investors pricing in. Given where the technology stands today, we may have overestimated the staying power of many AI products, especially those that promise to automate entire workflows or think on our behalf.
A recent MIT study found that overreliance on tools like ChatGPT can impair memory and reduce our capacity for independent thinking. That’s not just an ethical concern; it’s a product risk. When tools take too much control, users stop paying attention. They check out faster and are more likely to churn when the results don’t hold up.
So why are so many of these tools falling short?
In part, because of how they’re designed.
Most AI products today are built on closed, proprietary architecture. These “walled garden” products resemble traditional SaaS and consumer apps – only now with autocomplete.
Closed applications predefine the logic, features, and data flows. End users (or agents) can tweak inputs, but they can’t rewire the underlying source code. Interaction is limited to a UI or managed endpoints that abstract away the nuts and bolts of how the system actually works.
That abstraction made sense in the SaaS era, when building software required a large team of expensive developers, and when users favored simplicity over control.
But today, code is cheap to produce and software can be written with natural language. This means users and agents can now inspect, rewire, and extend product capabilities themselves – a trend that will only accelerate over the next two to three years.
In this new paradigm of flexible, participatory software, abstraction from source code creates unnecessary rigidity in the product experience.
And that rigidity is a competitive liability.
AI startups building on closed foundations are vulnerable. It’s easy for incumbents – who are also built on proprietary stacks, but with broad distribution and deep pockets – to integrate AI into existing workflows and feature-kill startups overnight.
And the usual startup defense – amassing a data moat as the company scales – is no longer a reliable strategy. With the right training methods, models do not need a large dataset to generate tailored output.
As a result, many of today’s AI startups, even those with fast initial growth to $10M or $20M ARR, may struggle to sustain themselves over time.
02 | Orchestration is the new moat.
So where can startups build a competitive advantage?
Increasingly, at the orchestration layer. This is the part of the stack that governs how software behaves — how inputs are routed, components are selected and sequenced, workflows are modified, and network incentives are managed.
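As a rough sketch of what this layer looks like in practice, consider a minimal orchestrator that registers components and decides which ones run, in what order, for a given request. All names here are illustrative, not a reference to any specific product:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Orchestrator:
    # Components are interchangeable; the routing policy is where
    # the product's real control (and value) lives.
    components: dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[dict], dict]) -> None:
        self.components[name] = fn

    def route(self, request: dict) -> list[str]:
        # A trivial routing rule keyed on request type; real systems
        # select and sequence components based on far richer signals.
        if request.get("type") == "summarize":
            return ["retrieve", "summarize"]
        return ["retrieve"]

    def run(self, request: dict) -> dict:
        state = dict(request)
        for step in self.route(request):
            state = self.components[step](state)
        return state

orch = Orchestrator()
orch.register("retrieve", lambda s: {**s, "docs": ["doc1", "doc2"]})
orch.register("summarize", lambda s: {**s, "summary": f"{len(s['docs'])} docs"})

result = orch.run({"type": "summarize"})
print(result["summary"])  # → 2 docs
```

The components themselves are commoditized; the `route` method is the orchestration logic the vendor actually owns.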
Startups can win by building products that users can customize and shape themselves.
This follows a familiar pattern in software: when a component of the software stack suddenly becomes cheap and widely accessible, value shifts to controlling how that component is then manipulated and used.
In the cloud era, data storage became the commodity. Value moved to building large, unique data sets, and building systems that could manipulate that data in ways others couldn’t.
Today, AI is making code the commodity. Anyone can generate features, and data matters far less. So the value is now in product customization and flexibility – how users can build and manipulate the code to meet their objectives.
In a world where orchestration is the moat, open-source companies have a structural advantage over their "walled garden" counterparts.
Here’s why:
1. Open-source products are directly programmable.
By exposing core components — like the code, model weights, or data — open-source systems let developers directly modify how the product works. This enables far greater flexibility and customization than closed systems, which only allow for configuration within predefined boundaries set by the vendor.
2. Contributor networks will amplify open-source advantages.
Open-source ecosystems benefit from global contributors who extend functionality, add integrations, and patch issues in real time. This speeds up development and keeps the product aligned with the latest tools and use cases — a key advantage when orchestration and flexibility matter more than ever.
3. Incumbents must self-cannibalize in order to compete.
Open-source startups win by offering free, flexible alternatives to expensive, closed products, then monetizing through add-ons. Incumbents can’t follow this open-source playbook without exposing their code or allowing self-hosting, moves that would erode their margins and undermine their business model.
4. Better alignment between usage and revenue.
As orchestration becomes the core source of value, companies need pricing models that scale with usage and integration. Traditional software is sold as a fixed-cost product – priced per seat or per tier – which breaks down when value comes from ongoing customization and system-level control. Open-source flips this. It moves both costs and revenue to a variable structure. Teams start with a free, self-hosted product, pay only for the infrastructure required to run it, and adopt paid add-ons like hosting, tooling, or support as the product becomes more deeply integrated into their workflow. This creates a more flexible cost structure and better aligns revenue with value creation for the customer.
03 | How to make open-source work.
Open-source software is well-suited for a world where orchestration is a key differentiator. But without thoughtful design, these systems can become brittle and hard to manage.
Winning products strike a careful balance: flexible enough to customize, stable enough to scale.
The best open-source products start with strong default configurations that simplify setup and deployment. They need to work out of the box for teams that never customize a thing.
At the same time, these products need to offer clear extension points that guide users on where and how to modify the product without breaking its core logic.
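One common way to expose such extension points is a hook registry: users attach their own functions at sanctioned points, while the core logic stays protected. This is a minimal sketch with hypothetical names, not a prescription:

```python
from typing import Callable

# The vendor defines the stable extension points; anything outside
# this dict is core logic that contributions cannot rewire.
HOOKS: dict[str, list[Callable]] = {"pre_process": [], "post_process": []}

def register_hook(point: str, fn: Callable) -> None:
    if point not in HOOKS:
        raise ValueError(f"unknown extension point: {point}")
    HOOKS[point].append(fn)

def run_pipeline(text: str) -> str:
    for fn in HOOKS["pre_process"]:
        text = fn(text)
    result = text.upper()  # stand-in for the core, protected logic
    for fn in HOOKS["post_process"]:
        result = fn(result)
    return result

# A user customizes behavior without touching the core.
register_hook("pre_process", str.strip)
register_hook("post_process", lambda s: s + "!")

print(run_pipeline("  hello  "))  # → HELLO!
```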
They must also support portability, giving users control over their data, the freedom to deploy across environments, and easy ways to connect into external systems.
In open-source, every new contribution can introduce bugs or security risks. These systems need strong quality controls. CI/CD pipelines must go beyond syntax checks to ensure code changes behave as expected and come from trusted sources. AI can help automate this, but only if the vendors define what “safe” and “correct” mean, and bake that into the system design.
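A contribution gate along these lines might combine a behavioral test run with a trust check on the author. This is a simplified sketch; the allow-list and commands are illustrative, and a real system would verify cryptographic commit signatures rather than match email strings:

```python
import subprocess
from typing import Callable

# Illustrative allow-list; real systems verify signed commits.
TRUSTED_SIGNERS = {"maintainer@example.com"}

def run_behavior_tests() -> bool:
    # Run the full behavioral test suite, not just a linter.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def gate(commit_author: str,
         tests_pass: Callable[[], bool] = run_behavior_tests) -> bool:
    # A change merges only if the author is trusted AND behavior holds.
    return commit_author in TRUSTED_SIGNERS and tests_pass()

print(gate("stranger@example.com", tests_pass=lambda: True))    # → False
print(gate("maintainer@example.com", tests_pass=lambda: True))  # → True
```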
In open-source, the core product is usually free, and companies make money by selling add-ons. As value shifts to orchestration, the most valuable add-ons will be the ones that help teams run, manage, and extend the system more effectively. That includes:
Agentic companions. AI agents that automate tasks, trigger workflows, and coordinate across components.
Monitoring and alerting. Dashboards to trace system behavior, with automated alerts to detect and debug failures.
Enterprise integrations. Prebuilt connectors that reduce integration risk and speed up deployment.
Network access and transaction fees. Connecting to live systems — like payment networks or third-party services — and taking a cut of financial or data transactions.
Collaboration and governance. Shared templates, usage analytics, and role-based access controls for multi-user teams.
Finally, great open systems are observable. They help users understand system behavior, debug issues, and ensure modified deployments are running predictably.
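In practice, observability often starts with structured events emitted per pipeline step, so users of a modified deployment can trace what ran and how long it took. A minimal sketch, with illustrative field names:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def traced(step_name: str, fn, payload: dict) -> dict:
    # Wrap any pipeline step and emit one structured JSON event,
    # so behavior stays inspectable even in customized deployments.
    start = time.perf_counter()
    result = fn(payload)
    log.info(json.dumps({
        "step": step_name,
        "duration_ms": round((time.perf_counter() - start) * 1000, 2),
        "input_keys": sorted(payload),
        "output_keys": sorted(result),
    }))
    return result

out = traced("retrieve", lambda p: {**p, "docs": 3}, {"query": "q"})
print(out["docs"])  # → 3
```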
The best AI products going forward will be the ones users can shape directly.
You can’t just wrap a black-box model in a closed app and expect to beat billion-dollar incumbents. They can do that too, and distribute it faster.
What they can’t do is open up their systems, at least not without cannibalizing their existing business.
That’s where open-source has an edge. These products put users in the driver’s seat, giving them the power to inspect, remix, and extend the system to fit their needs. This level of participation builds trust, drives retention, and keeps users engaged.
It’s a hard product challenge. But it’s the best path to defensibility. And frankly, it’s a more exciting category to invest in, and to help founders – and now users – build together.