There’s a lot of excitement around MCP (Model Context Protocol) right now—especially after OpenAI announced it would adopt Anthropic’s open-source standard last week. And for good reason.
It’s now becoming trivial to connect your AI tool to systems like Notion, Spotify, or internal APIs—without needing to duct-tape together brittle integrations.
For anyone who’s spent the last year hacking together agents, this is a big deal.
What MCP Actually Does
If you’ve built anything with LLMs recently—assistants, copilots, workflow tools—you’ve probably wrestled with “tool use.” It went by different names: function-calling, API orchestration, agentic workflows. But the pain was the same.
Maybe you fed API docs or an OpenAPI spec to a model and hoped it would “figure it out.” Maybe you hardcoded a LangChain integration that broke the moment an endpoint changed or a new version shipped breaking changes. Maybe you tried to hand-roll a logic layer that barely held together.
MCP fixes that. It introduces a standardized protocol between models and tools—APIs, databases, and more—that packages what an LLM needs into a clean, machine-readable format.
You can think of it like OAuth for LLM-native apps: extensible, ergonomic, and increasingly interoperable.
In Claude for Desktop, for example, enabling MCP just means pointing to a config like this:
{ "mcpServers": { "spotify": { "command": "uv", "args": [ "--directory", "/Development/music/spotify-mcp", "run", "spotify-mcp" ] }, "memory": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-memory" ] }, "puppeteer": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-puppeteer" ] } } }
Yes, MCP is essentially an API wrapper. But it’s the good kind of wrapper: a useful abstraction that lets developers focus on building value-added workflows—not endlessly patching brittle integrations. I wouldn’t be surprised if companies like Slack, Notion, and Airtable end up hosting their own MCP servers, making model interaction even more seamless and robust.
And like any good standard, once it gains traction, it becomes invisible infrastructure.
In short, MCP adoption will allow your AI to instantly “learn” a new skillset.
From Interface to Infrastructure
The story of MCP is part of a larger pattern we’ve seen play out many times before:
- A new interface emerges. Some new capability becomes technically possible—API access, virtual machines, LLMs talking to tools.
- Developers rush to build wrappers. Everyone builds their own version: different formats, conventions, edge-case handling. Innovation happens, but so does redundancy.
- A standard abstracts the annoying stuff. The best ideas coalesce into a shared layer. Integration gets easier. Complexity moves up the stack.
- Everyone builds on the same rails. At this point, the underlying mechanism becomes a given. New entrants use the same primitives. Tool use is no longer a differentiator—it’s expected.
We’re in step 3 with MCP, heading into 4.
And step 4 is when commoditization begins.
Where Moats Will Come From
Tool use was never the hard part, but it has been messy, and thankfully it’s now becoming commoditized, just as infrastructure, auth, and routing logic were in past generations of dev tools.
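To make “minimal glue code” concrete, here’s a rough sketch using the Python MCP SDK: launch one of the servers from the earlier config, ask it what tools it exposes, and call one. The specific tool name and arguments are assumptions for illustration:

# glue.py - discovering and calling MCP tools (tool name and arguments are illustrative)
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the memory server the same way the desktop config does
    params = StdioServerParameters(
        command="npx", args=["-y", "@modelcontextprotocol/server-memory"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # machine-readable catalog the model can reason over
            print([tool.name for tool in tools.tools])
            # Call whichever tool the model selects; this one is assumed for illustration
            result = await session.call_tool("read_graph", arguments={})
            print(result.content)

asyncio.run(main())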
When every model can read/write to the same tools via a shared interface, and when agents can reflect, replan, and query context-rich APIs with minimal glue code… what actually differentiates your product?
If your answer is “our agent can book meetings and update Salesforce,” you’re in trouble.
Domain knowledge isn’t a new moat, but it’s come back into focus as the primary axis of differentiation.
The Bitter Lesson
Rich Sutton’s Bitter Lesson tells us that general methods that scale with compute tend to outperform handcrafted systems built by domain experts. And we’ve seen that play out, notably in games (e.g., AlphaGo), computer vision (e.g., AlexNet), and now LLMs (e.g., ChatGPT, Claude).
But here’s the nuance: data is latent domain knowledge.
So the real question isn’t whether domain expertise matters; it’s whether it’s durable and defensible. Durability depends on how you capture it: through hard-coded rules, or by letting models learn it via reinforcement learning or similar methods. We don’t yet have self-supervised agents that can learn complex domains purely from interaction, and until then you have to codify the expertise yourself. That could be through prompt engineering, structured workflows, or RL pipelines that embed domain knowledge into the model itself.
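As a toy example of what “codifying it yourself” can mean in practice, here’s a sketch of expert heuristics expressed as a structured check that sits alongside a model; the fields and thresholds are invented for illustration, not rules from any real underwriting system:

# domain_rules.py - expert heuristics as an explicit layer around a model (all values invented)
from dataclasses import dataclass

@dataclass
class CashFlowSignal:
    monthly_inflow: float
    monthly_outflow: float
    overdraft_events: int

def expert_flags(signal: CashFlowSignal) -> list[str]:
    """Hard-won heuristics a domain expert would apply before trusting a raw model score."""
    flags = []
    if signal.monthly_inflow <= 0:
        flags.append("no verifiable income in the cash flow data")
    if signal.overdraft_events > 3:
        flags.append("frequent overdrafts in the lookback window")
    if signal.monthly_outflow > 1.2 * signal.monthly_inflow:
        flags.append("spending persistently exceeds income")
    return flags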
Now, whether domain knowledge is defensible is up for debate. But I think it can be—maybe not forever, but long enough to build a lead that’s hard for new entrants to overcome.
That’s what we did at Petal. We spent five years developing cash flow underwriting models, training them on our portfolio of credit card holders and their behavior. That dataset—real-world performance data on cash flow underwriting outcomes—is what made it possible to credibly launch Prism Data. Few others have access to data like that, and it’s not easy to replicate.
Focusing on domain knowledge isn’t working against the Bitter Lesson—it’s the prerequisite. Even AlphaGo was seeded with over 100,000 human Go games, which encoded implicit domain expertise. Yes, AlphaGo also simulated many games against itself, but until we have agents that can master entire domains through interaction alone, the teams who’ve codified real expertise will have the edge.
Not all problems are like Go. Go is beautifully closed—perfect rules, no randomness, fully observable, with a clear win condition. It’s ideal for self-play and optimization. Most real-world problems aren’t: they’re messy, ambiguous, and human (judgment, emotion, trust).
In verticals like HVAC, for example, data is far more scarce. Building schematics, installation patterns, and the spatial logic of where HVAC systems and related products go—this isn’t readily available, let alone standardized. I could be wrong, but I’m not sure simulating millions of schematics and mapping HVAC routes—without a large corpus of prior analysis from civil and mechanical engineers—is going to work. Unlike the universe of Go or chess games, which are effectively open source, this kind of domain knowledge often lives in PDFs, tribal knowledge, or CAD files locked inside proprietary systems.
As a VC, Here’s What I’m Looking For
- What proprietary data are you generating or capturing?
- How are you embedding expert knowledge into your product, be it through data or otherwise?
- Once MCP makes tool use trivial, what remains that’s uniquely yours?
If the answer is: “We’ve built something nobody else can replicate because it’s informed by years of lived expertise and real-world usage,” I’m listening.
If the answer is: “Our agent can call Notion and write a follow-up email,” I’m not.