
MCP vs Agent Skills: Why They're Different, Not Competing
Skills and MCPs solve fundamentally different problems: one excels at information delivery and context management, the other provides structured tool integration. Here's why both matter.
When Agent Skills launched in October 2025, they sparked immediate debate in the AI tooling community. Some prominent voices, including developer Simon Willison, suggested Skills might be "a bigger deal than MCP," noting his waning interest in MCPs due to their token consumption issues. The timing seemed damning for MCP—barely a year after Anthropic introduced the Model Context Protocol in November 2024, a lighter-weight alternative emerged that appeared to solve MCP's biggest problem: context window exhaustion.
I've heard plenty of people discount MCP now, thinking we should all just be using Skills instead. But here's what we've learned after using both technologies side by side: Skills and MCPs aren't competing solutions to the same problem. They're fundamentally different architectures serving different purposes.
Skills excel at information delivery and adaptive context management, functioning as ephemeral clouds of knowledge that LLMs can pull from as needed. MCPs provide structured tool integration, giving LLMs deterministic ways to speak to the outside world through well-defined protocols. Rather than replacement, they serve complementary roles. And with MCP's recent adoption of progressive discovery in January 2026, the original context efficiency advantage that Skills held has disappeared.
What remains are two distinct approaches to two distinct challenges in LLM integration. Think of it like a painter's easel—different brushes for different strokes, different tools for different purposes. This article explains why Skills aren't a replacement for MCP, and why both technologies matter for building the future of AI integration.
What is MCP and Why It Mattered
When Anthropic launched the Model Context Protocol in November 2024, it was genuinely revolutionary. The protocol simplified the entire process of connecting LLMs to the outside world. Before MCP, tool calling was difficult, non-standardized, and inconsistent across platforms. MCP made it dramatically easier and enabled people to build integrations that actually worked.
The impact was immediate. Within months, thousands of MCP servers had been created and deployed. The protocol shipped with SDKs in Python, TypeScript, C#, and Java, making it accessible across the ecosystem. Major platforms adopted it quickly: ChatGPT, Claude, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code all added first-class MCP support. By early 2026, MCP was seeing over 97 million monthly SDK downloads with more than 10,000 active servers deployed.
MCP genuinely revolutionized how we connect LLMs to external systems. That's what MCPs are—and they're brilliant at it.
The Original Context Window Problem
But people quickly realized there were some serious downsides to MCP, especially as more developers came to understand how LLMs actually work. The problem? Your context window—that finite space where you can influence what the LLM will output—is precious real estate. It's critical that you manage it carefully and only put things in there that you need, because otherwise you'll get hallucinations and unreliable outputs.
When MCP first launched, it put everything into the context window upfront. Every tool's name, description, output schema, input schema—everything loaded immediately. That was extraordinarily wasteful. A single tool could consume 500 to 900 tokens before you'd even started any actual work.
With only a handful of tools, you could easily end up with no context available to do anything useful.
This became particularly painful for us at Umbraco when we looked at composing tools together to create magic—MCPs where tools could combine to make impossible things happen. That led to a lot of tools. The Umbraco MCP currently has around 345 tools. When you compose all those together, you end up consuming around 30,000 tokens just for tool definitions. That's more than most entire context windows.
At the time, this was widely seen as a critical design flaw that limited MCP's usability and scalability.
The Rise of Skills and Progressive Discovery
Skills were introduced in October 2025, almost a year after MCP was created. They launched to considerable acclaim and gained traction incredibly quickly. And the reason Skills caught fire so fast? They had a clever trick up their sleeve.
Core Innovation: Progressive Discovery
Skills introduced progressive discovery. The idea: start with the smallest amount of information possible and put only that into the context window. It's remarkably context-efficient.
At the start, the only information in context about a particular skill is the name and a very short description. That might take up between 20 and 50 tokens out of 200,000. Incredibly efficient. Then when the LLM decides it wants to use a particular skill, it pulls in the skill markdown file and the rest of the instructions linked to it. Brilliant!
In that skill markdown file, you can link to other files within the skill. The LLM only loads those if it decides it needs more information. There might be links in there, and it might decide to go out to the internet. This progressive discovery that Skills have as a key advantage made real waves.
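To make this concrete, here's a minimal sketch of what a skill's entry point could look like. The skill name, description text, and linked file paths are all illustrative, not taken from any real skill—the key point is that only the frontmatter's name and description sit in context up front, while the body and linked files load on demand:

```markdown
---
name: umbraco-content-modeling
description: Guidance and best practices for designing Umbraco document types and content structures. Use when creating or reviewing content models.
---

# Umbraco Content Modeling

Start with the composition patterns below, then consult the linked
references only if the task calls for them.

- Naming conventions: [naming.md](references/naming.md)
- Composition patterns: [composition.md](references/composition.md)
```

Only the two frontmatter lines are "always on"; everything below the divider, and everything linked from it, is pulled in progressively.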
Skills were released as an open standard by Anthropic on December 18, 2025, with enterprise partnerships including Canva, Notion, Figma, and Atlassian providing prebuilt skills. The approach quickly gained traction across agentic tools and platforms. Most agentic tools out there now support Skills.
When Skills first came out, Simon Willison said they could be a bigger deal than MCPs. He expected thousands upon thousands of Skills just like when MCP first launched, making a real impact on software engineering and development. I think that's probably going to happen—we've seen momentum building, and people are discovering how useful they can be.
But I very much disagree that they're a replacement for MCP. I don't think that's the case at all.
MCP's Recent Catch-Up: Progressive Discovery Arrives
In January 2026—just last month—Claude Code introduced something I'd been genuinely waiting for ever since Skills launched. Anthropic took the same progressive discovery trick that made Skills so context-efficient and applied it directly to MCP.
So when you load an MCP now, you get the name and description of each tool—really small and compact, taking up around 20 to 50 tokens each. Exactly the same as Skills. Absolutely incredible.
Then if the LLM decides it wants to use a particular tool for the task at hand, it loads in the input schema, output schema, full description, and everything around it. That means you can host many more MCP tools, and it won't decimate your context window. It makes your conversations far more efficient and practical.
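A rough sketch of that two-stage loading in TypeScript. The tool shape and field names here are illustrative, not the actual MCP wire format—the point is simply that the summary the LLM sees up front is a fraction of the full definition:

```typescript
// Illustrative tool definition: the full schemas are heavy,
// while the name-plus-description summary is tiny.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object;   // full JSON Schema, loaded only on demand
  outputSchema: object;  // likewise deferred until the tool is chosen
}

// What progressive discovery keeps in context up front.
function summarize(tool: ToolDefinition): { name: string; description: string } {
  return { name: tool.name, description: tool.description };
}

const createContent: ToolDefinition = {
  name: "createContent",
  description: "Create a content node with a title and body.",
  inputSchema: {
    type: "object",
    properties: { title: { type: "string" }, body: { type: "string" } },
  },
  outputSchema: { type: "object", properties: { id: { type: "string" } } },
};

// Only this summary occupies the context window until the LLM picks the tool.
console.log(JSON.stringify(summarize(createContent)));
```

Multiply that saving across hundreds of tools and the difference is dramatic.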
The impact was immediate and measurable. Token overhead dropped by 85%—from around 77,000 tokens to just 8,700 tokens for setups with 50+ tools. Tool calling accuracy improved significantly as well: Claude Opus 4 jumped from 49% to 74% accuracy, while Opus 4.5 went from 79.5% to 88.1%.
For me, this means that initial problem with MCP—the concern that haunted it from launch—has actually been solved. I think MCP has some catching up to do in terms of perception. But that context window problem? It's gone.
Active Context Management: The Real Game-Changer
This trick that Skills and now MCPs both use is something I call active context management. Last year, the phrase was "context engineering"—ensuring your context window only includes what it needs for a given task. That's genuinely hard to do well, and it's time-consuming to curate exactly the right material into the context window to support the tasks at hand.
Active context management is something different—a step forward. This is what Skills and MCP tools now do: they allow you to put a small amount of information into context, and then if it needs to be used, the rest gets pulled in afterward.
The challenge is a bit of a chicken-and-egg situation. How much information do you put into the context window to make it obvious for the LLM when to trigger it and progressively load the rest? That's really tricky to get right, and that's the real benefit of active context management—nailing that balance.
This active approach enables LLMs to work with dozens or hundreds of tools while minimizing context exhaustion. It's the key innovation that makes modern LLM integration practical at scale.
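One way to picture that chicken-and-egg trade-off is as a selection step over summaries. This is a deliberately naive, hypothetical sketch—a keyword check, far cruder than what real hosts do—but it shows why the quality of that tiny up-front description decides whether the right thing ever gets triggered:

```typescript
interface SkillSummary { name: string; description: string; }

// Naive, illustrative trigger check: does the task share any
// meaningful words with the skill's short description?
function shouldTrigger(task: string, skill: SkillSummary): boolean {
  const words = skill.description
    .toLowerCase()
    .split(/\W+/)
    .filter(w => w.length > 4); // ignore short filler words
  return words.some(w => task.toLowerCase().includes(w));
}

const skill: SkillSummary = {
  name: "content-modeling",
  description: "Best practices for designing document types",
};

console.log(shouldTrigger("help me design a document type", skill)); // true
console.log(shouldTrigger("what's the weather", skill));             // false
```

Write the description too narrowly and the skill never fires; too broadly and it fires constantly and wastes the context budget. Nailing that balance is the craft.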
This Is Where They Diverge
I firmly believe there's a real divergence between Skills and MCPs. It's important to recognize that there are things MCPs are better at doing than Skills, and vice versa. There are areas where you'd choose one over the other. This is where the boundary between determinism and non-determinism becomes the key differentiator.
Skills Are Ephemeral Information Clouds
Skills, by their very nature, are extraordinarily easy to set up. It's literally just a directory with a file in it—that's the minimum you need. They're trivial to build and start using.
But this also means that Skills can be quite ephemeral. They have an element of non-determinism in them, which makes them absolutely amazing—but at the same time, it makes them a little unstructured.
What we're finding is that Skills work best as information clouds. They're things that exist to provide information. They're really about automated context management—bringing in the right context at the right time to improve the LLM's ability to make good decisions across a whole range of tasks.
Skills support progressive discovery beautifully. You start with a single piece of information, and from there, it can link out to related context, code examples, and deeper guidance as needed. The LLM can reach in and pull out as much or as little as it needs for the task at hand, following the most direct path to exactly the right context at the right time. It's about information efficiency for the LLM—one of the most important things about Skills.
Their advantage is their flexibility—their ability to adapt and change, to be used as much or as little as needed. That's really the essence of what Skills are and why they matter.
Skills excel at:
- Supporting diverse, exploratory LLM tasks
- Providing knowledge libraries and best practices
- Delivering automated context management
- Offering information guidance that adapts to the conversation
But they're fundamentally different from what an MCP is.
MCPs Are Defined, Structured Tools
MCP is really about tools. It's about structure. It's about defined, stable architecture. It's about giving an LLM a deterministic way to connect to things.
With Skills, you have that boundary and ability to move between non-determinism and determinism. With MCP, you don't really have that—and that can actually be a really good thing. When you trigger a tool, you know you're triggering it. It's much easier to get tools that compose together to complete agentic flows. Whereas with Skills, that's trickier and less predictable because triggering Skills can be somewhat uncertain.
With MCP, because it's a well-defined and structured architecture, you have SDKs for it. You can build proper structures with testing, helpers, core systems, shared code, and all the infrastructure that comes with mature software development. That's something you really can't do with Skills in the same way. They look similar, but they're actually very different.
You can have MCPs that chain into other MCPs for hierarchical composition. You have MCPs that are hosted on the internet with persistent connections. You can do dynamic tool loading with infrastructure designed for it. You can't do any of that with Skills.
So it may look like Skills are a basic thing—and they're not. Whereas MCP may be seen as enterprise or more grown-up—and that's not what it's about at all. It's about structure. MCPs are structured. They're precise. They work in well-defined ways. They have hard edges. Whereas Skills don't. Skills are there to provide information, and maybe provide a little bit of deterministic information as well.
MCPs provide the technical capabilities that enable:
- Enterprise integrations where reliability matters
- Complex multi-step processes with predictable workflows
- Agentic systems that compose tools together reliably
- Infrastructure with shared code, testing, and helpers
The key strength of MCPs is structure and composability.
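That composability is easiest to see in code. These two tool handlers are hypothetical (not the real Umbraco MCP API), but they illustrate the property that matters: each tool has typed inputs and a predictable output shape, so the result of one call feeds deterministically into the next:

```typescript
// Hypothetical tool handlers showing how structured tools compose:
// typed inputs, predictable outputs, one step feeding the next.
interface DocumentType { id: string; alias: string; }
interface ContentNode { id: string; documentTypeId: string; title: string; }

function createDocumentType(alias: string): DocumentType {
  return { id: `dt-${alias}`, alias };
}

function createContent(docType: DocumentType, title: string): ContentNode {
  return {
    id: `node-${title.toLowerCase().replace(/\s+/g, "-")}`,
    documentTypeId: docType.id,
    title,
  };
}

// A two-step agentic flow: the output of step one is the input of step two.
const blogPostType = createDocumentType("blogPost");
const post = createContent(blogPostType, "Hello World");
console.log(post.documentTypeId); // "dt-blogPost"
```

With Skills there's no equivalent contract between steps—whether and how one skill's output feeds another is left to the LLM's judgment.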
Where They Converge: Scripts and Code Execution
But there is a similarity—something that confuses people about Skills and maybe led people to think Skills could replace MCPs. That's the way you can put scripts into Skills and have script files run in the sandboxed environment. On first pass, it looks like these things could replace MCPs.
But it's not really the case.
Scripts in Skills are quite basic. I wouldn't want to create full structures behind them that the skill can trigger, and it's very difficult to share scripts between Skills or fit them into a stable architecture. Skills scripts are really there to allow a level of determinism—to use them for things like reports.
Getting an LLM to create a report on something is difficult because it will give you a different answer every time. But when you need something deterministic, you can run a skill that will return the same data every time it makes that call, and then the LLM can use it with much less likelihood of hallucination.
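A skill script of that kind might look like the sketch below. The data source is hard-coded here purely for illustration (a real script would query something fixed); the point is that sorting the keys and computing the total in code makes the report byte-for-byte identical on every run, which the LLM can then narrate without inventing numbers:

```typescript
// Illustrative skill script: given the same input, it always
// emits the same report, so the LLM has stable facts to work from.
const pageCounts: Record<string, number> = { blog: 42, products: 17, docs: 88 };

function buildReport(counts: Record<string, number>): string {
  // Sort keys so the output ordering never varies between runs.
  const lines = Object.keys(counts)
    .sort()
    .map(section => `${section}: ${counts[section]}`);
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return [...lines, `total: ${total}`].join("\n");
}

console.log(buildReport(pageCounts));
```

Run it twice, get the same bytes twice—exactly the determinism an LLM on its own can't give you.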
Dynamic Code Execution: A False Equivalence
With MCP, there's this thing called code execution that emerged in November 2025. The idea is that the LLM generates code on the fly—writing raw API calls, scripts, or queries at runtime rather than invoking pre-built tool endpoints. Instead of calling a well-defined MCP tool like createContent(title, body), the LLM dynamically writes a fetch call or script to hit an API endpoint directly.
Skills are very good at that pattern too—their sandboxed script environment can run LLM-generated code in a similar way. But with code execution, you throw away the benefits of systemization: all the deliberate work of shaping an API surface into something the LLM can consume efficiently.
You lose the optimization of well-defined tool schemas: parameter validation, consistent output formats, and composability between tools. You're ignoring the work of manipulating and shaping API structures specifically to make them LLM-friendly. Raw API calls are often verbose, poorly documented for LLM consumption, and inconsistent.
Both Skills scripts and MCP code execution become less effective when used this way—you trade reliability and composability for flexibility you rarely need.
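The contrast is easy to sketch. Both functions below are hypothetical, but they show what's lost: a structured tool checks its parameters before anything runs, while dynamically generated "raw" code ships whatever the LLM happened to write:

```typescript
// Hypothetical contrast: a structured tool validates inputs up front,
// while a raw generated call skips that safety net entirely.

// Structured tool: parameters are checked before anything runs.
function createContentTool(
  args: { title?: unknown; body?: unknown }
): { ok: boolean; error?: string } {
  if (typeof args.title !== "string" || args.title.length === 0) {
    return { ok: false, error: "title must be a non-empty string" };
  }
  if (typeof args.body !== "string") {
    return { ok: false, error: "body must be a string" };
  }
  return { ok: true };
}

// Raw generated call: nothing stops malformed input going straight out.
function rawGeneratedCall(args: Record<string, unknown>): string {
  return `POST /api/content ${JSON.stringify(args)}`; // no validation, no schema
}

console.log(createContentTool({ title: "", body: "hi" }).error); // caught early
console.log(rawGeneratedCall({ titel: "typo slips through" }));  // sent as-is
```

The misspelled `titel` in the raw call is exactly the kind of silent failure a well-defined schema exists to catch.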
This is the problem. Skills are brilliant, but they're brilliant at what they do. It's the lack of structure that makes them unsuitable for providing tools that need to be well-defined and highly structured. That's where the difference lies, because MCP is there to provide structure. It exists within a framework, within a collection of tools that are all similar, all there to support the same set of functionality. It's also much easier to compose tools to bring the magic out.
The better approach: Use scripts for deterministic outputs where consistency matters, and use properly structured MCP tools for integration and action. Don't treat dynamic code generation as a substitute for purpose-built tool architecture.
Where Umbraco Uses Skills
We've thought long and hard about this. Given what we've said here, we see Skills as ephemeral, like a cloud of information that the LLM can call on. It can pull as much or as little as it needs to extract the information.
We use Skills for information delivery—functioning as libraries of knowledge and best practices. That information might include code examples. It might include scripts that make small calls somewhere. But it's in no way structured in the way MCPs are.
Structured Skill Sets Aligned With Extension Points
The Skills we have can still be organized. There can be lots of them—like the back office skills we've built, structured to match the extension points in the back office. But there's no shared code between them; they're really a library of information.
We include real-world code examples in Skills—runnable samples that developers and LLMs can use directly. These aren't just snippets; they're practical, executable examples of proper setup and implementation patterns. This grounds the LLM's output in tested, working code rather than generated approximations, significantly reducing hallucination.
We also provide direct links to Umbraco source code and the UUI (Umbraco UI) component library as authoritative references. Skills point the LLM to actual source repositories and component libraries, grounding responses in the ultimate source of truth rather than potentially outdated or hallucinated documentation. This ensures best practices come directly from Umbraco's own codebase and UI patterns, not from the LLM's general training data.
That's where I see Skills going—there to provide information to the LLM and help manage the context it has given the task at hand.
Where Umbraco Uses MCPs
I don't think anything's going to change in our strategy around MCP now that Skills are part of the agentic world. We're going to continue using MCP to provide tools into Umbraco, to open up Umbraco so you can use an LLM to talk to it, manipulate it, compose it, and use it in amazing and interesting ways.
The Umbraco Developer MCP currently exposes over 330 tools spanning 36 endpoint groups, providing near-complete parity with the Umbraco Management API. An LLM with access to this MCP can create document types, manage media, configure members, set up cultures, define data types—essentially all the operations you'd normally perform through the back office interface or API calls.
We're continuing to build MCPs based on our add-on products and to create MCPs around different use cases. I don't see that changing at all because of Skills.
From our perspective, I see them as quite different things with quite different use cases, and that's definitely going to be apparent in how we use these two separate technologies at Umbraco with LLMs.
Can They Work Together?
I see MCP and Skills both being used at Umbraco and both being leveraged heavily to make using LLMs with Umbraco as easy and fulfilling as possible. I don't see any problem with using them together either.
We have plans to create content modeling Agent Skills to help you develop your content structures and provide information, expertise, and best practices on how to set up and generate your own Umbraco sites.
The Powerful Combination
What you'll have is the skill providing all the knowledge, best practices, and information on how to do it right, and then the MCP actually doing the action. The skill is the brain—it knows what to do, what to create, how to update things, and how to create good structures for content in Umbraco. The MCP is the muscle that actually implements it.
That's where I see things going—an example of collaboration between the two. I see that as genuinely important.
Conclusion
Skills and MCPs are very different things—and both are critically important for the future of how LLMs integrate with and enhance Umbraco.
Skills function as information clouds with flexible context management, excelling at knowledge delivery and adaptive context enrichment. MCPs provide structured tools with reliable composition and integration, excelling at deterministic workflows and system integration.
Claims that Skills replace MCPs misunderstand these architectural differences. Skills adoption doesn't diminish MCP value—they solve different problems. Now that both support progressive discovery, the context efficiency question is settled. What remains are the real differences: structure versus flexibility, determinism versus adaptation, tools versus knowledge.
The future belongs to thoughtful use of both. Neither technology is universally better. Architecture should drive the choice, not hype or convenience. When Skills and MCPs work together, each playing to its strengths, you get the most powerful LLM integration possible.
So when evaluating these technologies for your use case, don't ask "Which one is better?" Ask "Do I need information delivery or tool execution?" The answer to that question will point you in the right direction.