Part I: Food for Thought
Every so often a simple idea rewires everything. The shipping container didn’t just optimise logistics; it flattened the globe, collapsed distances, and rewrote the economics of trade. In its geometric austerity was a quiet revolution: standardisation.
Similarly, HTML and HTTP didn’t invent information exchange — any more than the shipping crate invented trade — but by imposing order on chaos, they transformed it.
RESTful APIs, for their part, standardised software-web interaction, and made services programmable. The web became not just browsable, but buildable — a foundation for automation, orchestration, and integration, and entire industries sprang up around that idea.
Now, the ‘agentic web’—where AI agents call APIs and other AI agents—needs its own standards.
This isn’t just an extension of the last era — it’s a shift in how computation works.
Two promising approaches have emerged for agent-web interaction: Model Context Protocol (MCP) and Invoke Network.
- Model Context Protocol (MCP): a communication standard designed for chaining reasoning across multiple agents, tools, and models.
- Invoke Network: a lightweight, open-source framework that lets models interact directly with real-world APIs at inference time — without needing orchestration, backends, or agent registries.
This essay compares these two paradigms — MCP and Invoke Network (disclosure: I’m a contributor to Invoke Network) — and argues that agentic interoperability will require not just schemas and standards, but simplicity, statelessness, and runtime discovery.
Part II: Model Context Protocol: Agents That Speak the Same Language
Origins: From Local Tools to Shared Language
Model Context Protocol (MCP) emerged from a simple, powerful idea: that large language models (LLMs) should be able to talk to each other — and that their interactions should be modular, composable, and inspectable.
The ideas behind it took shape in the AI Engineer community on GitHub and Twitter, a loose but vibrant collective of developers exploring what happens when models gain agency. Early projects like OpenAgents and LangChain had already introduced the idea of tools: giving LLMs controlled access to functions. But MCP pushed the idea further.
Rather than hardcoding tools into individual agents, MCP proposed a standard — a shared grammar — that would allow any agent to dynamically expose capabilities and receive structured, interpretable requests. The goal: make agents composable and interoperable. Not just one agent using a tool, but agents calling agents, tools calling tools, and reasoning passed like a baton between models.
Introduced by Anthropic in November 2024, MCP is not a product. It’s a protocol. A social contract for how agents communicate — much like HTTP was for web pages.
How MCP Works
At its core, MCP is a JSON-based interface description and call/response format. Each agent (or tool, or model) advertises its capabilities by returning a set of structured functions — similar to an OpenAPI schema, but tailored for LLM interpretation.
A typical MCP exchange has three parts:
- Listing Capabilities: An agent exposes a set of callable functions — their names, parameters, return types, and descriptions. These can be real tools (like get_weather) or delegations to other agents (like research_topic).
- Issuing a Call: Another model (or the user) sends a request to that agent using the defined format. MCP keeps the payloads structured and minimal, avoiding ambiguous natural language where it matters.
- Handling the Response: The receiving agent executes the function (or prompts another model), and returns a structured response, often annotated with rationale or follow-up context.
This sounds abstract, but it’s surprisingly elegant in practice. Let’s look at a real example — and use it to draw out the strengths and limits of MCP.
A Worked MCP Example: Agents That Call Each Other
Let’s imagine two agents:
- WeatherAgent: Provides weather data.
- TripPlannerAgent: Plans a day trip, and uses the WeatherAgent via MCP to check the weather.
In this scenario, TripPlannerAgent has no hardcoded knowledge of how to fetch weather. It simply asks another agent that speaks MCP.
Step 1: WeatherAgent describes its capabilities
{
"functions": [
{
"name": "get_weather",
"description": "Returns the current weather in a given city",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to get weather for"
}
},
"required": ["city"]
}
}
]
}
This JSON schema is MCP-compliant. Any other agent can introspect this and know exactly how to invoke the weather function.
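Because the parameters block is plain JSON Schema, a caller can check its arguments mechanically before issuing a call. A minimal sketch using the jsonschema library (a caller-side convenience, not something MCP mandates):
from jsonschema import validate  # pip install jsonschema

# The "parameters" object from WeatherAgent's schema above.
parameters_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string", "description": "The city to get weather for"}
    },
    "required": ["city"],
}

validate(instance={"city": "San Francisco"}, schema=parameters_schema)  # passes
validate(instance={}, schema=parameters_schema)  # raises ValidationError: 'city' is required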
Step 2: TripPlannerAgent makes a structured call
{
"call": {
"function": "get_weather",
"arguments": {
"city": "San Francisco"
}
}
}
The agent doesn’t need to know how the weather is fetched — it just needs to follow the protocol.
Step 3: WeatherAgent responds with structured data
{
"response": {
"result": {
"temperature": "21°C",
"condition": "Sunny"
},
"explanation": "It’s currently sunny and 21°C in San Francisco."
}
}
TripPlannerAgent can now use that result in its own logic — maybe suggesting a picnic or a museum day based on the weather.
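To make TripPlannerAgent’s side of the exchange concrete, here is a minimal Python sketch. The call_agent transport and the agent URL are hypothetical; only the message shapes come from the MCP-style exchange above.
import json
import urllib.request

def call_agent(agent_url, function, arguments):
    # Hypothetical transport: POST an MCP-style call to another agent and
    # return its parsed JSON response. Any HTTP client would work here.
    payload = json.dumps({"call": {"function": function, "arguments": arguments}})
    request = urllib.request.Request(
        agent_url, data=payload.encode(), headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# TripPlannerAgent's logic: delegate the weather lookup, then branch on the result.
reply = call_agent("https://weather-agent.example/mcp", "get_weather", {"city": "San Francisco"})
condition = reply["response"]["result"]["condition"]
plan = "picnic in Golden Gate Park" if condition.lower() == "sunny" else "museum day"
print(plan)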
What This Enables
This tiny example demonstrates several powerful capabilities:
✅ Agent Composition — agents can call other agents as tools
✅ Inspectability — capabilities are defined in schemas, not prose
✅ Reusability — agents can serve many clients
✅ LLM-native design — responses are still interpretable by models
But MCP has its limits — which we’ll explore next.
When to Use MCP (And When Not To)
Model Context Protocol (MCP) is elegant in its simplicity: a protocol for describing tools and delegating tasks between agents. But, like all protocols, it shines in some contexts and struggles in others.
✅ Where MCP Excels
1. LLM-to-LLM Communication
MCP was designed from the ground up to support inter-agent calls. If you’re building a network of AI agents that can call, query, or consult one another, MCP is ideal. Each agent becomes a service endpoint, with a schema that other agents can reason about.
2. Decentralised, Model-Agnostic Systems
Because MCP is just a schema convention, it doesn’t depend on any particular runtime, framework, or model. You can use OpenAI, Claude, or your local LLM — if it can interpret JSON, it can speak MCP.
3. Multi-Hop Planning
MCP is especially powerful when combined with a planner agent. Imagine a central planner that orchestrates workflows by dynamically selecting agents based on their schemas. This enables highly modular, dynamic systems.
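As a rough illustration of that pattern, a planner might select an agent by scanning the function descriptions in each advertised schema. The registry and keyword matching below are deliberately naive simplifications; a real planner would let the LLM itself reason over the schemas:
# Hypothetical in-memory registry: agent name -> its advertised MCP schema.
registry = {
    "WeatherAgent": {"functions": [
        {"name": "get_weather", "description": "Returns the current weather in a given city"}
    ]},
    "NewsAgent": {"functions": [
        {"name": "top_headlines", "description": "Returns today's top news headlines"}
    ]},
}

def select_agent(task):
    # Naive planner step: pick the first agent whose function description
    # overlaps with the task wording.
    task_words = set(task.lower().split())
    for agent, schema in registry.items():
        for fn in schema["functions"]:
            if task_words & set(fn["description"].lower().split()):
                return agent, fn["name"]
    return None, None

print(select_agent("check weather before a trip"))  # ('WeatherAgent', 'get_weather')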
❌ Where MCP Struggles
1. No Real “Runtime”
MCP is a protocol — not a framework. It defines the interface, but not the execution engine. That means you need to implement your own glue logic for:
- Auth
- Input/output mapping
- Routing
- Error handling
- Retries
- Rate limits
MCP doesn’t manage that for you — it’s just the language agents use to communicate.
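To make that concrete, even something as basic as retries lands in your own glue code. A minimal sketch, assuming the underlying MCP request is wrapped in a callable:
import time

def call_with_retries(call, attempts=3):
    # Glue logic MCP leaves to you: retry a failing agent call with
    # exponential backoff. `call` is any zero-argument callable that
    # performs the underlying MCP request.
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off 1s, 2s, 4s, ...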
2. Requires Structured Thinking
LLMs love ambiguity. MCP doesn’t. It forces developers (and models) to be explicit: here’s a tool, here’s its schema, here’s how to call it. That’s great for clarity — but requires more upfront thinking than, say, slapping .tools = […] on an OpenAI agent.
3. Tool Discovery and Versioning
MCP is still early — there’s no central registry of agents, no real system for versioning or namespacing. In practice, developers often pass around schemas manually or hardcode references.
| Use Case | Should You Use MCP? |
| --- | --- |
| Agent calling another agent | ✅ Perfect fit |
| Building a large, modular agent network | ✅ Ideal |
| Call a REST API or webhook | ❌ Overkill |
| Need built-in routing, OAuth, retries | ❌ Use a framework |
| Tool discovery at inference time | ❌ Use Invoke Network |
And this is where Invoke Network enters — not as a competitor, but as a counterpart. If MCP is like WebSockets for agents (peer-to-peer, structured, low-level), then Invoke is like HTTP: a stateless, one-shot API surface for LLMs.
Part III: Invoke Network
HTTP for LLMs
While MCP emerged to coordinate agents, Invoke was born from a simpler, sharper pain: the chasm between LLMs and the real world.
Language models can reason, write, and plan — but without tools, they’re sealed in a sandbox. Invoke began with a question:
What if any LLM could discover and use any real-world API, just like a human browses the web?
Existing approaches — MCP, OpenAI function calling, AgentOps — were powerful, but each was too heavyweight, too rigid, or too fragile for that vision. Tool use felt like duct-taping SDKs to natural language. Models had to be pre-wired to scattered tools, each with its own quirks.
Invoke approached it differently:
- One tool, many APIs
- One standard, infinite interfaces
Just define your endpoint — method, URL, parameters, auth, example — in clean, readable JSON. That’s it. Now any model (GPT, Claude, Mistral) can call it naturally, securely, and repeatedly.
⚙️ How Invoke Works
At its core, Invoke is a tool router built for LLMs. It’s like openapi.json, but leaner — built for inference, not engineering.
Here’s how it works:
- You write a structured tool definition (agents.json) with method, url, auth, parameters, and an example.
- Invoke parses that into a callable function for any model that supports tool use.
- The model sees the tool, decides when to use it, and fills out the parameters.
- Invoke handles the rest — auth, formatting, execution — and returns the result.
No custom wrappers. No chains. No scaffolding. Just one clean interface. And if something goes wrong? We don’t preprogram retries. We let the model decide. Turns out: it’s pretty good at it.
Here’s a real example:
{
"agent": "openweathermap",
"label": "🌤 OpenWeatherMap API",
"base_url": "https://api.openweathermap.org",
"auth": {
"type": "query",
"format": "appid",
"code": "i"
},
"endpoints": [
{
"name": "current_weather",
"label": "☀️ Current Weather Data",
"description": "Retrieve current weather data for a specific city.",
"method": "GET",
"path": "/data/2.5/weather",
"query_params": {
"q": "City name to retrieve weather for (string, required)."
},
"examples": [
{
"url": "https://api.openweathermap.org/data/2.5/weather?q=London"
}
]
}
]
}
From the model’s point of view, this is one use of the single ‘Invoke’ tool, not a new tool for each added endpoint and not a custom plugin: just one discoverable interface. This means the model can discover APIs on the fly, just as humans browse the web. To return to the shipping container analogy: if MCP enables highly coordinated workflows over tightly orchestrated infrastructure between select ports, Invoke lets you send any package, anywhere, any time.
Now we can put it to use:
# 1. Install dependencies:
# pip install langchain-openai invoke-agent
from langchain_openai import ChatOpenAI
from invoke_agent.agent import InvokeAgent
# 2. Initialize your LLM and Invoke agent
llm = ChatOpenAI(model="gpt-4.1")
invoke = InvokeAgent(llm, agents=["path-or-url/agents.json"])
# 3. Send a natural-language query that matches your agents.json
user_input = input("📝 You: ").strip()
response = invoke.chat(user_input)
print("🤖 Agent:", response)
In under a minute, your model is fetching live data—no wrappers, no boilerplate.
You’ll say: “Check weather in London.”
The agent will go to openweathermap.org/agents.json, read the file, and just… do it.
Just as robots.txt let crawlers safely navigate the web, agents.json lets LLMs safely act on it. Invoke turns the web into an LLM-readable ecosystem of APIs. Much like HTML allowed humans to discover websites and services on the fly, Invoke allows LLMs to discover APIs at inference time.
Want to see this in action? Check out the Invoke repo’s example notebooks to see how to define an agents.json, wire up auth, and call APIs from any LLM in under a minute. (Full “network”-style discovery is on the roadmap once adoption reaches critical mass.)
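To get a feel for runtime discovery, here is a minimal sketch that fetches a published agents.json and inspects it before handing it to the model. The URL is hypothetical, and the InvokeAgent usage simply mirrors the quick-start above:
import json
import urllib.request

# Fetch a host's published agents.json at inference time: no pre-wiring.
url = "https://example.com/agents.json"  # hypothetical host
with urllib.request.urlopen(url) as response:
    manifest = json.load(response)

# Inspect what the host exposes before handing it to the model.
for endpoint in manifest["endpoints"]:
    print(endpoint["name"], "->", endpoint["method"], manifest["base_url"] + endpoint["path"])

# The same URL can be passed straight to InvokeAgent, as in the quick-start:
# invoke = InvokeAgent(llm, agents=[url])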
When to Use Invoke: Strengths and Tradeoffs
Invoke shines brightest in the real world.
Its core premise — that a model can call any API, securely and accurately, from a single schema — unlocks a staggering range of use cases: calendar assistants, email triage, weather bots, automation interfaces, customer support agents, enterprise copilots, even full-stack LLM-powered workflows. And it works out of the box with OpenAI, Claude, LangChain, and more.
Strengths:
- Simplicity. Define a tool once, use it everywhere. You don’t need a dozen Python wrappers or agent configs.
- Model-agnostic. Invoke works with any model that supports structured tool use — including open-source LLMs.
- Open & extensible. Serve tools from local config, hosted registries, or future public endpoints (example.com/agents.json).
- Composable. Models can reason over tool metadata, inspect auth requirements, and even decide when to explore new capabilities.
- Developer-focused. Unlike agentic frameworks that require complex orchestration, Invoke slots neatly into existing stacks — frontends, backends, workflows, RAG pipelines, and more.
- Context-efficient. API configurations live in the execution chain, not the prompt, so they don’t consume precious context-window tokens.
- Discoverable at runtime. Invoke connections are not hard-wired at build time, and the set of available tools is not fixed at setup.
But like any system, Invoke has tradeoffs.
Limitations:
- No central memory or state. It doesn’t manage long-term plans, context windows, or recursive subtasks. That’s left to you — or to other frameworks layered on top.
- No retries, timeouts, or multi-step workflows baked in. Invoke trusts the model to handle partial failure. In practice, GPT-4 and Claude do this remarkably well — but it’s still a philosophical choice.
- Statelessness. Tools are evaluated per invocation. While this keeps things clean and atomic, it may not suit complex, multi-step agents without additional scaffolding.
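When you do need multi-step behaviour, that scaffolding can stay thin. A minimal sketch, assuming you keep conversation history yourself and fold recent turns back into each prompt (the history handling is our own illustration, not part of Invoke):
history = []  # (speaker, text) pairs kept outside the stateless tool layer

def chat_with_memory(invoke, user_input, max_turns=5):
    # Replay recent turns into the prompt so each stateless invoke.chat()
    # call still sees the conversation so far. Real systems might summarise
    # or window the history more carefully.
    context = "\n".join(f"{who}: {text}" for who, text in history[-max_turns:])
    prompt = f"{context}\nUser: {user_input}" if context else user_input
    reply = invoke.chat(prompt)
    history.append(("User", user_input))
    history.append(("Agent", reply))
    return reply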
MCP vs. Invoke: Two Roads Into the Agentic Web
Both MCP and Invoke aim to bring LLMs into contact with the real world — but they approach it from opposite directions.
| Feature | Model Context Protocol (MCP) | Invoke |
| --- | --- | --- |
| Core Goal | Agent-to-agent coordination via message passing | LLM-to-API integration via structured tool use |
| Design Origin | Comparable to protocols like WebSockets and JSON-RPC | Inspired by REST/HTTP and OpenAPI |
| Primary Use Case | Composing multi-agent workflows and message pipelines | Connecting LLMs directly to real-world APIs |
| Communication Style | Stateful sessions and messages exchanged between agents | Stateless, schema-driven tool calls |
| Tool Discovery | Agents must be pre-wired with capabilities | Tool schemas can be discovered at runtime (agents.json) |
| Error Handling | Delegated to agent frameworks or orchestration layers | Handled by the model, optionally guided by context |
| Dependencies | Requires MCP-compatible infra and agents | Just needs model + JSON tool definition |
| Composable With | AutoGPT-style ecosystems, custom agent graphs | LangChain, OpenAI tool use, custom scripts |
| Strengths | Fine-grained control, extensible routing, agent memory | Simplicity, developer ergonomics, real-world compatibility |
| Limitations | Heavier to implement, requires full stack, context bloat, tool overload | Stateless by design, no agent memory or recursion |
Conclusion: The Shape of the Agentic Web
We are witnessing the emergence of a new layer of the internet — one defined not by human clicks or software calls, but by autonomous agents that reason, plan, and act.
If the early web was built on human-readable pages (HTML) and programmatic endpoints (REST), the agentic web demands a new foundation: standards and frameworks that let models interact with the world as fluidly as humans once did with hyperlinks.
Two approaches — Model Context Protocol and Invoke — offer different visions of how this interaction should work:
- MCP is ideal when you need coordination between multiple agents, session state, or recursive reasoning — the WebSockets of the agentic web.
- Invoke is ideal when you need lightweight, one-shot tool use with real-world APIs — the HTTP of the agentic web.
Neither is the solution on its own. Much like the early internet needed both TCP and HTTP, the agentic layer will be pluralistic. But history suggests one lesson: the tools that win are the ones that are easiest to adopt.
Invoke is already proving useful to developers who just want to connect an LLM to the services they already use. MCP is laying the groundwork for more complex agent systems. Together, they’re sketching the contours of what’s to come.
The agentic web won’t be built in a single day, and it won’t be built by a single company. But one thing is clear: the future of the web is no longer just human-readable or machine-readable — it’s model-readable.
About the author
I’m a lead researcher at Commonwealth Bank AI Labs and contributor to Invoke Network, an open-source agentic-web framework. Feedback and forks welcome: https://github.com/mercury0100/invoke