- If your agent doesn't have a full Bash-style code execution environment it can't run skills. MCP is a solid option for wiring in tools there.
- MCP can help solve authentication, keeping credentials in a place where the agent can't steal them if it gets compromised. MCPs can also handle access control and audit logging in a single place.
- MCPs are trivial to write and maintain - at least in my experience and language of choice - and bash scripts are cursed. But I guess you can use a different scripting language.
- Agents can pollute their context by reading the script. I want to expose a black box that just works.
The LLM can look at the OpenAPI spec and construct queries - I often do this pretty easily.
Skills with an API exposed by the service usually mean your coding agent can access the credentials for that service. This means that if you are hit by a prompt injection, the attacker can steal those credentials.
As the article states, LLMs are fantastic at writing code, and not so good at issuing tool calls.
tbh, that companies tried to make something proprietary of this concept is probably why its adoption has been weak and why we have "MCP vs CLI/Skills/etc" debates in the first place. In contrast, CLI tools only require a general bash shell (potentially in a sandbox environment), which is very standardised.
Also, even with the above, there is more opportunity for the bot to go off piste and run cat this and awk that. Meanwhile the "operator", i.e. the grandpa who has an iPhone but has never used a computer, has no chance of getting the bot back on track as he tries to renew his car insurance.
"Just going to try using sed to get the output of curl https://.."
"I don't understand I just want to know the excess for not at fault incident when the other guy is uninsured".
Everyone has gone claw-brained. But it really is ok to write code and save that code to disk and execute that code later.
You can use MCP, or even just a hard-coded API call from your back end to the service you wanna use, like it's 2022.
With MCP you can at least set things up such that the agent can't access the raw credentials directly.
Your argument is the same for an MCP server - auth is stored somewhere on disk, what's to stop it from reading that file? The answer is the same as above.
python -c '
import os
print(open(os.path.expanduser("~/.kube/config")).read())
'
The point I'm making here is that with an MCP you can disable shell access entirely, at which point the agent cannot read credential files that it's not meant to be able to access.

My argument here is that one of the reasons to use MCP is that it allows you to build smaller agents that do not have a full code execution environment, and those agents can then use MCPs to make calls to external services without revealing those credentials to the agent.
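To make that concrete, here's a minimal sketch using FastMCP; the server name, env var, and endpoint are all made up:

```python
# Sketch only: the credential lives in the MCP server's process.
# The agent sees a get_invoice tool, never the token itself.
# "billing", BILLING_API_TOKEN, and the URL are hypothetical.
import os

import httpx
from fastmcp import FastMCP

mcp = FastMCP("billing")
API_TOKEN = os.environ["BILLING_API_TOKEN"]  # never enters the agent's context

@mcp.tool()
def get_invoice(invoice_id: str) -> str:
    """Fetch a single invoice by id."""
    resp = httpx.get(
        f"https://billing.example.com/invoices/{invoice_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    mcp.run()
```

An agent that has no shell and can only talk to this server can fetch invoices all day without ever being able to read or exfiltrate the token.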
I think we both agree that if your agent has full Bash access it can access credentials.
The approach you're proposing is that with a well designed MCP server, you can limit the permissions for your agent to only interacting with that MCP server, essentially limiting what it can do.
My argument is that you can accomplish the identical thing with an agent by limiting access to only invoking a specific CLI tool, and nothing more.
Both of our approaches accomplish the same thing. I'm just arguing that an MCP server is not required to accomplish it.
But... if you're going all-in on the Bash/Python/arbitrary-programming-language environments that are necessary to get Skills to work, you're going to find yourself in a position where the agent can probably read config files that you don't want it to see.
Also, I run programs on my machine with a different privilege level than myself all the time. Why can’t an agent do that?
Better sandboxing. Accessing an MCP server doesn't require you to give an agent permissions on your local machine.
MCP servers can expose tools, resources, and prompts. If you're using a skill, you can "install" it from a remote source by exposing it on the MCP server as a "prompt". That helps solve the "keep it updated" problem for skills - it gets updated by interrogating the MCP server again.
Or if your agentic workflow needs some data file to run, you can tell the agent to grab that from the MCP server as a resource. And since it's not a static file, the content can update dynamically -- you could read stock prices or the latest state of a JIRA ticket, and so on. It's like an AI-first, dynamic content filesystem.
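Here's a hedged sketch of both ideas using FastMCP; the file layout and the `tickets://` URI scheme are my own invention:

```python
# Sketch: serving a skill as a prompt and a ticket as a dynamic
# resource. The skills/ path and tickets:// scheme are hypothetical.
import json
from pathlib import Path

from fastmcp import FastMCP

mcp = FastMCP("skills-and-data")

@mcp.prompt()
def deploy_skill() -> str:
    """The latest version of the deploy skill, re-read on every request."""
    return Path("skills/deploy/SKILL.md").read_text()

@mcp.resource("tickets://{ticket_id}")
def ticket(ticket_id: str) -> str:
    """Current state of a ticket, fetched fresh each time."""
    # Stand-in for a real JIRA/API lookup.
    return json.dumps({"id": ticket_id, "status": "in-progress"})

if __name__ == "__main__":
    mcp.run()
```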
There'd be a little extra friction compared to MCP – the agent would presumably have to find and download and read the OpenAPI/Swagger spec, and the auth story might be a little clunkier – but you could definitely do it, and I'm sure many people do.
Beyond that, there are a few concrete things MCP provides that I'm a fan of:
- first-class integration with LLM vendors/portals (Claude, ChatGPT, etc), where actual customers are frequently spending their time and attention
- UX support via the MCP Apps protocol extension (this hasn't really entered the zeitgeist yet, but I'm quite bullish on it)
- code mode (if using FastMCP)
- lots of flexibility on tool listings – it's trivial to completely show/hide tools based on access controls (see the sketch below), versus having an AI repeatedly stumble into an API endpoint that its credentials aren't valid for
I could keep going, but the point is that while it's possible to use another tool for the job and get _something_ up and running, MCP (and FastMCP, as a great implementation) is purpose built for it, with a lot of little considerations to help out.
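On the tool-listing point, a minimal sketch using the low-level python-sdk Server API; the scope lookup (`get_caller_scopes`) and the billing tools are hypothetical stand-ins:

```python
# Sketch: tools the caller's credential can't use are never listed,
# so the model can't waste turns calling them.
from mcp.server import Server
from mcp.types import Tool

server = Server("billing")

ALL_TOOLS = [
    Tool(name="get_invoice", description="Read one invoice",
         inputSchema={"type": "object"}),
    Tool(name="void_invoice", description="Void an invoice",
         inputSchema={"type": "object"}),
]
REQUIRED_SCOPE = {"get_invoice": "billing:read", "void_invoice": "billing:write"}

def get_caller_scopes() -> set[str]:
    # Hypothetical stand-in: in a real server this would come from
    # the session's authenticated credential.
    return {"billing:read"}

@server.list_tools()
async def list_tools() -> list[Tool]:
    scopes = get_caller_scopes()
    return [t for t in ALL_TOOLS if REQUIRED_SCOPE[t.name] in scopes]
# (stdio run loop omitted for brevity)
```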
Then you’d need a way of passing all that info on to a model, so something top level.
It’d be useful to do things in the same way as others (so if everyone is adding Openapi/swagger you’d do the same if you didn’t have a reason not to).
And then you’ve just reinvented something like MCP.
It’s just a standardised format.
Why? It isn't obvious to me...
When a human is coding against a traditional API, it might be a bit annoying if the API has four or five similar-sounding endpoints that each have a dozen parameters, but it's ultimately not a showstopper. You just spend a little extra time in the API docs, do some Googling to see what people are using for similar use cases, decide which one to use (or try a couple and see which actually gets you what you want), commit it, and your script lives happily ever after.
When an AI is trying to make that decision at runtime, having a set of confusing tools can easily derail it. The MCP protocol doesn't have a step that allows it to say "wait, this MCP server is badly designed, let me do some Googling to figure out which tool people are using for similar use cases". So it'll just pick whichever one seems most likely to be correct, and if it's wrong, then it's just wasted time and tokens and it needs to try the next option. Scaled up to thousands or millions of times a day, it's pretty significant.
There are a lot of MCP servers out there that are just lazy mappings from OpenAPI/Swagger specs, and it often (not always, to be fair) results in a clunky, confusing mess of tools.
A skill is, at the end of the day, just a prompt.
A skill can also act as an abstraction layer over many tools (implemented as an MCP server) to save context tokens.
Skills offer a short description of their use and thus occupy only a few hundred tokens in the context, compared to the thousands of tokens consumed if all tools were in the context.
When the LLM decides that the skill is useful, we can dynamically load the skill's tools into the context (using a `load_skill` meta-tool).
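A rough sketch of that pattern with FastMCP, where the skill registry and `skills/` layout are invented for illustration:

```python
# Sketch of a load_skill meta-tool: only the one-line descriptions
# sit in context up front; full instructions load on demand.
# SKILLS and the skills/ directory layout are hypothetical.
from pathlib import Path

from fastmcp import FastMCP

mcp = FastMCP("skill-router")

SKILLS = {
    "pdf-reports": "Generate PDF reports from CSV data.",
    "jira-triage": "Summarise and label incoming JIRA tickets.",
}

@mcp.tool()
def load_skill(name: str) -> str:
    """Load the full instructions for one skill into the conversation."""
    if name not in SKILLS:
        return f"Unknown skill. Available: {', '.join(SKILLS)}"
    return Path(f"skills/{name}/SKILL.md").read_text()

if __name__ == "__main__":
    mcp.run()
```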
... most standard desktop software? How do you interact with Blender, Unity3D, Ableton Live, Photoshop, DaVinci Resolve, ... when they don't provide any API or programmatic CLI, don't want to open random network ports, but are absolutely fine with opening a tightly controlled local channel through an fd/pipe like MCP uses?
Seems like unnecessarily constraining it.
On the other hand, something like context7 is just `npx ctx7 resolve <lib>` then `npx ctx7 docs <id>` — two stateless shell calls, done. No server to maintain, no protocol overhead. CLI is the right tool there.
> FastMCP is the standard framework for building MCP applications
Standardized by whom? In an era where technology exists that can lend the appearance of legitimacy to just about anyone, that kind of statement needs to be qualified.
UPDATE: I was wrong about this, see comment reply. The python-sdk in https://github.com/modelcontextprotocol is a fork of FastMCP.
My read of what happened is that the author spiked an initial implementation of 'fastmcp' on Nov 30 2024; 5 days later, the author relicensed it to MIT and donated it to the python sdk (10 days after anthropic announced MCP):
https://github.com/PrefectHQ/fastmcp/pull/54
It was incorporated on Dec 21 2024, and hardened through the efforts of one of the python-sdk maintainers.
The author seemingly abandoned the github project shortly after donating it to the python-sdk and marked it as unmaintained, and it remained so for several months (there are roughly zero commits between January and April):
https://github.com/PrefectHQ/fastmcp/issues/96
He also apparently has made almost no other contributions to the mcp python-sdk:
https://github.com/modelcontextprotocol/python-sdk/commits?a...
Many contributors to the python sdk continued to iterate on the mcp server implementation using the name fastmcp (since it had been donated to the project), resulting in growing interest:
https://trends.google.com/explore?q=fastmcp%20&date=2024-12-...
Then around April 2025, the author, likely noticing the growing interest and stickiness of the name, decided to write a new version and start using the name fastmcp again.
https://github.com/PrefectHQ/fastmcp/graphs/contributors?fro...
The author clearly made an attempt to promote his effort:
https://www.reddit.com/r/mcp/comments/1np6dwg/fastmcp_20_is_...
This resulted in a lot of confusion among users, which persists to this day. I only looked into this last year, because I was one of those users who was suddenly confused about the provenance of what I was actually using vs what I thought I was using; and as I looked into it I was suddenly seeing lots of questionable reddit comments pop up in subreddits I was reading, all evangelizing fastmcp 2.0 and using language that was contributing to the confusion.
The author's interest in monetizing the fastmcp github repo is understandable, and he and others have clearly put a lot of effort into iterating on his SaaS onramp, but the confusion arises simply because the author wanted to capitalize on the success of mcp and on the popularity of the fastmcp name, the initial growth and popularity of which were primarily driven by the effort and support of contributors to the mcp python sdk.
For client side MCP it's a different story.
I think MCP is fine in an env where you have no access to tools, but you cannot ripgrep your way through an MCP (unless you make an MCP that calls ripgrep on e.g. a repo, in which case what are you doing).
So what I've found to be useful, or even critical, is treating dependency changes as "authority changes." What I mean is that upgrades and new transitive deps shouldn't be in the same permissions bucket as "normal" execution. First, isolate the install/update into a separate job or identity with no access to production secrets. Second, require an explicit allowlist or signed artifact for packages in the execution environment. Third, log who/what authorized this new code to run as a first-class audit event.
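As a toy illustration of the second point (the pinned allowlist is made up; a real one would be generated and signed as part of the release process):

```python
# Toy sketch of an execution-environment allowlist: refuse to start
# if any installed distribution isn't explicitly approved.
import sys
from importlib import metadata

# Hypothetical pins; a real list would cover the whole environment.
ALLOWLIST = {"httpx": "0.27.2", "fastmcp": "2.3.0"}

def check_environment() -> None:
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if ALLOWLIST.get(name) != dist.version:
            sys.exit(f"unapproved package in environment: {name}=={dist.version}")

if __name__ == "__main__":
    check_environment()
```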
If agents are going to operate the way we are trying to have them operate (unattended), then the question isn't only whether the package was malicious, but also why any unattended actor was allowed to do what it did. Isn't this in our best interest?
Is there some sort of tool that can be expressed as an MCP but not as an API or CLI command? Obviously we shouldn't map existing APIs to MCP tools, but why would I use an MCP over just writing a new "agentic ready" API route?