Palantir IPO'd at $19 in 2020. Dropped to $6 by 2022. Then climbed to a 640% total return over five years. Analysts spent a lot of that time confused about what Palantir actually sold. The answer was never really the software. It was the engineers who came with it.
That model has a name: the forward deployed engineer. And as of May 2026, both Anthropic and OpenAI are building billion-dollar ventures around it. If you want to understand why enterprise AI adoption is about to look very different over the next 18 months, this is the concept to understand.
What Is a Forward Deployed Engineer?
The standard software sales model works like this: a company builds a product, hands it to sales, sales pitches it to the client, and the client's IT team figures out how to get it working. A customer success manager might check in every quarter. That's roughly it.
Palantir flipped this. Instead of handing the product over and wishing the client luck, they embedded their own engineers — real ones, shipping real code — directly inside the client's organization. These engineers set up shop at the client's offices. They learn the client's internal systems, their data pipelines, their compliance constraints, their specific workflows. Then they build the thing that actually works for that specific company.
That's the forward deployed engineer. Not a consultant writing documentation. Not a sales engineer giving demos. A technical person doing real implementation work from inside the client's walls.
It works especially well for clients with complicated, high-stakes problems — hospitals, banks, government agencies, large financial institutions. Organizations where off-the-shelf software almost never fits, where the requirements are specific and weird, and where the cost of getting it wrong is high.
Why Normal Software Sales Fails for AI
With most SaaS products, the gap between "what the software does" and "what the client needs" is manageable. A project management tool, a CRM, a payroll system — these are well-understood categories. Clients have seen them before. Implementation follows a predictable script.
AI deployment doesn't work this way. There are two sides to making AI actually function inside a real business. The AI lab's engineers understand models and harnesses — the scaffolding around a model that gives it tools, memory, context, and the ability to take actions. The client's engineers understand the business: what data exists, what systems connect to what, where the compliance landmines are, what the actual bottleneck is.
Neither side alone can build what works. The AI engineer doesn't know enough about the client's business. The client's engineer doesn't know enough about building AI systems. You need both in the same room.
That knowledge gap is the actual reason enterprise AI adoption has moved slower than the technology's capabilities would suggest. Not because the models aren't good enough. Because the people who know how to deploy them are scarce, and the clients who need them most — hospitals, banks, large financial firms — are exactly the organizations where a wrong implementation causes the most damage.
The Palantir Proof: $6 to 640%
Palantir went public in September 2020 at $19 per share. By late 2022, the stock had dropped to around $6. Critics at the time had a consistent complaint: the revenue per client was enormous, but Palantir couldn't scale. Every new client required this expensive, slow, hands-on deployment process. How do you build a large software business when every sale involves embedding a team of engineers inside a Fortune 500 company for months?
The critics weren't wrong about the model. They were wrong about what it was worth. Over the five years following the IPO, Palantir delivered a roughly 640% return. The embedded deployment model turned out to be the product, not a workaround for a product that didn't exist yet.
The reason is stickiness. When an FDE team spends three months embedded inside Goldman Sachs building a custom AI system that touches their internal data infrastructure, Goldman Sachs is not switching vendors next year. The switching cost is not a subscription fee. It's rebuilding a system that's now woven into how the firm operates.
That's the business model. High cost to acquire, very high retention, very high contract value.
What Anthropic's $1.5 Billion Venture Actually Is
In May 2026, Anthropic announced a joint venture targeting enterprise AI deployment. The valuation at launch: $1.5 billion. Initial commitment: $300 million from Anthropic, Blackstone, and Hellman & Friedman. Additional backing from Apollo Global Management, General Atlantic, GIC, Leonard Green, and Suko Capital.
The founding partners are not random. Blackstone manages roughly $1 trillion in assets and is one of the most influential alternative asset managers on earth. Goldman Sachs is Goldman Sachs. These are not companies that sign partnership agreements because a technology sounds promising. They sign because they've done the analysis and decided they want the infrastructure locked in early.
The structure of this venture is, at its core, a forward deployment machine. Anthropic brings the models and the technical knowledge. The financial partners bring access to the clients — the banks, the asset managers, the institutions that have the most to gain from AI deployment and the least tolerance for failed implementations.
The $1.5 billion figure is almost certainly not where this ends. Palantir's early contracts with similar institutions were the foundation of everything that followed.
OpenAI's Version — Same Playbook, Bigger Numbers
Simultaneously, OpenAI is building something called the Development Company along the same lines. The numbers are larger: $4 billion raise, 19 investors, $10 billion valuation. The scope appears broader, with manufacturing and healthcare as targets alongside finance.
Worth noting: reportedly, there is no investor overlap between the two ventures. Anthropic and OpenAI have, between them, partitioned the major institutional capital. Whether by design or coincidence, both labs are now attached to different corners of the institutional world.
The scale difference is meaningful. A $10 billion vehicle targeting manufacturing and healthcare is a different operation from a $1.5 billion vehicle targeting finance. OpenAI appears to be building for breadth. Anthropic's initial focus is narrower and deeper.
Which approach produces better results is unknowable today. The Palantir comparison cuts both ways here — Palantir's early government and intelligence contracts were narrow and deep. That worked. Whether OpenAI's broader approach achieves the same stickiness is a real question.
The Real Reason AI Deployment Has Been Slow
There was a period — roughly 2024 through early 2025 — when a certain class of analyst and journalist pointed to slow enterprise AI adoption as evidence that AI capabilities were overstated. If the technology is this powerful, why aren't enterprises deploying it everywhere?
The framing was wrong. The deployment gap was never about model capability. It was about implementation infrastructure. Building an AI system that actually works inside a real enterprise — with its legacy data systems, compliance requirements, internal politics, and specific workflows — requires skills that simply weren't widely available yet. The models were ready before the deployment layer was.
Think of it like a harness. Claude Code, Codex, OpenClaw — these are scaffolding structures around AI models. They give models the tools to act, to remember context, to interface with external systems. An early example: NVIDIA's Voyager project in 2023, which used GPT-4 inside a Minecraft harness to create a self-improving agent that kept learning new skills without plateauing. The model was capable. The harness made it functional in a specific environment.
Every enterprise deployment is the same problem. You need the right model. You need a harness that connects that model to the client's actual systems. And you need people who understand both. Forward deployed engineers are, functionally, harness builders who work on-site.
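The loop an FDE builds can be sketched in a few dozen lines. This is a minimal, illustrative harness, not any lab's actual API: `FakeModel`, `lookup_invoice`, and the message format are all made up for the example. The real point is the shape of the loop — the model proposes an action, the harness executes it against the client's systems, and the result feeds back into the model's context.

```python
import json

class FakeModel:
    """Stand-in for a hosted model. A real harness would call an LLM API here."""
    def complete(self, messages):
        # Pretend the model first decides to look up a record, then answers
        # once a tool result is present in its context.
        if not any(m["role"] == "tool" for m in messages):
            return {"tool": "lookup_invoice", "args": {"invoice_id": "INV-1001"}}
        return {"answer": "Invoice INV-1001 is 14 days overdue."}

def lookup_invoice(invoice_id):
    # In a real deployment this would query the client's internal systems --
    # the part only someone embedded on-site knows how to wire up.
    records = {"INV-1001": {"status": "overdue", "days": 14}}
    return records.get(invoice_id, {"status": "unknown"})

TOOLS = {"lookup_invoice": lookup_invoice}

def run_harness(model, user_request, max_steps=5):
    """Core loop: model proposes an action, harness executes it, result feeds back."""
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        decision = model.complete(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step limit reached"

print(run_harness(FakeModel(), "How late is invoice INV-1001?"))
```

Swap `FakeModel` for a real model and `lookup_invoice` for the client's actual data systems, and this toy loop is structurally what an on-site harness builder spends months making robust — tool permissions, error handling, compliance checks, and all.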
The deployment gap is closing because the supply of people who can do this is increasing. These Anthropic and OpenAI ventures will accelerate that — they're essentially creating organized pipelines for matching FDE talent to enterprise clients. For more on how AI scaffolding tools have evolved, see Claude Code and OpenClaw cost comparison (2026).
My Take
The interesting number here isn't the $1.5 billion or the $10 billion. It's the lack of investor overlap.
If the same institutions were backing both Anthropic and OpenAI's deployment ventures, you'd read it as hedging — big money placing bets on multiple horses because nobody knows who wins. The fact that they've apparently split cleanly suggests something different. The major institutional players have each made a call. They picked one.
That's a stronger signal than the headline valuation numbers. Valuations are negotiated. Exclusive partnerships — where Blackstone and Goldman are structurally attached to Anthropic and not OpenAI — those reflect actual conviction about which infrastructure gets embedded into which institutions.
The stickiness argument from Palantir applies here too. Whichever lab gets its systems running inside the major financial institutions first has a multi-year lead that's very hard to displace. The race isn't for the best model. It's for the deepest integration. That race started this month.
FAQ
What does a forward deployed engineer actually do day to day?
They work inside the client's offices, using the client's systems. Practically: mapping existing data infrastructure, identifying which workflows can be automated, building the harness that connects an AI model to those specific workflows, testing against real data, and iterating based on what breaks. It's engineering work, not advisory work. They ship code.
Is this model only viable for large enterprises?
At the scale Anthropic and OpenAI are operating, yes — the economics require large contract values. A hospital system or a major bank can justify the cost of an embedded technical team. A 50-person company cannot. Smaller businesses will likely stick with off-the-shelf AI tooling and self-serve APIs rather than accessing this model directly.
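The economics can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions, not reported numbers from either venture:

```python
def min_viable_contract(engineers, loaded_cost_per_engineer, months, target_margin):
    """Smallest annual contract that covers an embedded team at a target margin.

    loaded_cost_per_engineer is the fully loaded annual cost (salary, benefits,
    overhead); target_margin is the gross margin the vendor wants to keep.
    """
    deployment_cost = engineers * loaded_cost_per_engineer * (months / 12)
    return deployment_cost / (1 - target_margin)

# Hypothetical: a 4-person team at $400k/yr fully loaded, embedded for
# 6 months, with the vendor targeting a 50% gross margin.
print(min_viable_contract(4, 400_000, 6, 0.5))  # -> 1600000.0
```

Under these assumed numbers, the deployment only makes sense at a seven-figure annual contract — which is why the target clients are banks and hospital systems, not 50-person companies.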
How is this different from regular IT consulting?
Traditional IT consultants typically come in with a pre-built methodology, apply it, and leave a system behind. FDEs come in with deep knowledge of a specific AI platform and build something that doesn't exist yet — custom to that client's environment. The consultant follows a playbook. The FDE writes one from scratch. The other meaningful difference: FDEs are ongoing. The engagement doesn't end at go-live.
Will Anthropic's venture actually accelerate enterprise AI adoption?
In the financial sector, almost certainly. Blackstone and Goldman Sachs have direct access to hundreds of portfolio companies and client firms. An FDE team that successfully deploys an AI system inside one Blackstone portfolio company becomes a template deployable across others — the learning compounds. The question is execution speed, not whether the model works. Palantir already proved it works.
The Palantir model took about five years from IPO to proof. Anthropic and OpenAI are starting with more capital, better models, and a blueprint that already exists. Whether five years compresses to two is the question worth watching.