

“Set your heart upon your work, but never on its reward.”
Bhagavad Gita

The Model Context Protocol (MCP) has quietly become one of the most important pieces of AI infrastructure. It solves a deceptively simple problem: how do you give an AI model access to tools, data, and services in a standardized way?
Before MCP, every AI integration was custom. Want your agent to query a database? Write a custom tool. Want it to read files? Another custom tool. Want it to interact with GitHub, Slack, Jira, or any other service? Custom integrations, each with their own schema, authentication flow, and error handling.
MCP replaces this with a single protocol.
MCP is a client-server protocol for AI tool use. The core concepts:

- Servers expose capabilities: tools the model can invoke, resources it can read, and prompt templates it can reuse.
- Clients live inside AI applications, connect to one or more servers, and relay the model's requests.
- Tools, resources, and prompts are the three capability types, each shown with examples below.
The protocol uses JSON-RPC 2.0 over either stdio (for local processes) or HTTP with Server-Sent Events (for remote services).
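On the wire, every interaction is a plain JSON-RPC message. Here's a sketch of what a tool invocation might look like (the method and field names follow the MCP spec; the tool name and arguments are illustrative):

```typescript
// A JSON-RPC 2.0 request asking a server to invoke a tool.
// The client writes this to the server's stdin (stdio transport)
// or POSTs it to the server's message endpoint (HTTP transport).
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_weather",            // which tool to invoke
    arguments: { city: "Berlin" },  // must match the tool's input schema
  },
};

console.log(JSON.stringify(request));
```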
Here's a complete MCP server that provides a single tool:
```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "weather",
  version: "1.0.0",
});

server.tool(
  "get_weather",
  "Get current weather for a city",
  { city: z.string().describe("City name") },
  async ({ city }) => {
    const response = await fetch(
      `https://api.weather.example/current?city=${encodeURIComponent(city)}`
    );
    const data = await response.json();
    return {
      content: [
        {
          type: "text",
          text: `Weather in ${city}: ${data.temperature}°C, ${data.condition}`,
        },
      ],
    };
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);
```

That's it. This server can now be connected to any MCP-compatible client: Claude Code, Cursor, Windsurf, or your own application. The model sees the tool description, knows how to call it, and interprets the result.
The value of MCP isn't in any single integration — it's in the standardization.
An AI assistant connected to five MCP servers has access to all of their tools simultaneously. A code assistant might, for example, combine version-control, database, documentation, and issue-tracker servers.
Each server is independent. Adding a new capability means connecting a new server, not modifying the AI application.
An MCP server written for Claude Code works with any other MCP client. The community has built servers for hundreds of services — databases, cloud providers, developer tools, communication platforms. You install them rather than building integrations from scratch.
Each MCP server runs as a separate process with its own permissions. The AI model can't access anything the server doesn't explicitly expose. This is a significant improvement over giving the model direct access to APIs with broad permissions.
Let me walk through building a more practical server: one that provides access to a PostgreSQL database.
Resources let the model inspect available data:
```typescript
// `db` is assumed to be an already-connected Postgres client (e.g. a pg Pool).
server.resource(
  "schema",
  "postgres://schema",
  async (uri) => {
    const tables = await db.query(
      "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
    );
    return {
      contents: [
        {
          uri: uri.href,
          mimeType: "application/json",
          text: JSON.stringify(tables.rows, null, 2),
        },
      ],
    };
  }
);
```

When the model needs to understand the database structure, it reads this resource.
Tools let the model take actions:
```typescript
server.tool(
  "query",
  "Execute a read-only SQL query",
  {
    sql: z.string().describe("SQL SELECT query to execute"),
  },
  async ({ sql }) => {
    // Safety: only allow SELECT statements
    if (!sql.trim().toUpperCase().startsWith("SELECT")) {
      return {
        content: [{ type: "text", text: "Error: Only SELECT queries are allowed" }],
        isError: true,
      };
    }
    const result = await db.query(sql);
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(result.rows, null, 2),
        },
      ],
    };
  }
);
```

Notice the safety check. The MCP server controls what the model can do — it's the enforcement layer.
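A bare prefix check is easy to fool, though: `SELECT 1; DROP TABLE users` starts with SELECT. The helper below (my own sketch, not part of the SDK) adds defense in depth by rejecting stacked statements and mutating keywords; the real safety net should still be a read-only database role:

```typescript
// Reject mutating keywords even when the statement starts with SELECT.
const FORBIDDEN = /\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|GRANT|TRUNCATE)\b/i;

function isSafeSelect(sql: string): boolean {
  const trimmed = sql.trim().replace(/;\s*$/, ""); // allow one trailing semicolon
  if (trimmed.includes(";")) return false;         // no stacked statements
  if (!/^SELECT\b/i.test(trimmed)) return false;   // must be a SELECT
  return !FORBIDDEN.test(trimmed);                 // no mutating keywords
}

console.log(isSafeSelect("SELECT name FROM users"));     // true
console.log(isSafeSelect("SELECT 1; DROP TABLE users")); // false
```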
Prompts provide reusable templates:
```typescript
server.prompt(
  "analyze_table",
  "Analyze the structure and data of a specific table",
  { table: z.string().describe("Table name to analyze") },
  async ({ table }) => {
    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: `Analyze the '${table}' table. Look at its schema, row count, and a sample of data. Identify any potential issues with data quality or schema design.`,
          },
        },
      ],
    };
  }
);
```

The stdio transport is the simplest: the MCP client spawns the server as a child process and communicates over stdin/stdout:
```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["./weather-server.js"]
    }
  }
}
```

Best for: local development tools, file system access, CLI wrappers.
For servers that run as web services:
```typescript
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import express from "express";

const app = express();
const transports = new Map<string, SSEServerTransport>();

app.get("/sse", async (req, res) => {
  const transport = new SSEServerTransport("/messages", res);
  transports.set(transport.sessionId, transport);
  // Clean up when the client disconnects
  res.on("close", () => transports.delete(transport.sessionId));
  await server.connect(transport);
});

app.post("/messages", async (req, res) => {
  const sessionId = req.query.sessionId as string;
  const transport = transports.get(sessionId);
  if (!transport) {
    res.status(404).send("Unknown session");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3001);
```

Best for: shared services, team-wide tools, cloud-hosted integrations.
Always validate inputs beyond the Zod schema. The schema ensures the right shape; your code ensures the right values:
```typescript
server.tool(
  "search_users",
  "Search users by email",
  { email: z.string().email() },
  async ({ email }) => {
    // Additional validation: normalize before querying
    const sanitized = email.toLowerCase().trim();
    const results = await db.query(
      "SELECT id, name, email FROM users WHERE email = $1",
      [sanitized]
    );
    return {
      content: [{ type: "text", text: JSON.stringify(results.rows) }],
    };
  }
);
```

Return errors in a way the model can understand and recover from:
```typescript
async ({ query }) => {
  try {
    const results = await search(query);
    if (results.length === 0) {
      return {
        content: [{ type: "text", text: "No results found. Try broader search terms." }],
      };
    }
    return {
      content: [{ type: "text", text: JSON.stringify(results) }],
    };
  } catch (error) {
    // `error` is `unknown` in TypeScript; narrow it before reading `.message`
    const message = error instanceof Error ? error.message : String(error);
    return {
      content: [{ type: "text", text: `Search failed: ${message}` }],
      isError: true,
    };
  }
}
```

For servers that call external APIs, add rate limiting and caching to prevent abuse:
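A token bucket covers the rate-limiting half; the class name, capacity, and refill rate below are illustrative, not part of any SDK:

```typescript
// Token bucket: allow short bursts up to `capacity`, refilling
// `ratePerSec` tokens per second. Each tool call consumes one token.
class TokenBucket {
  private tokens: number;
  private last = Date.now();

  constructor(private capacity: number, private ratePerSec: number) {
    this.tokens = capacity;
  }

  tryRemove(): boolean {
    const now = Date.now();
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.ratePerSec
    );
    this.last = now;
    if (this.tokens < 1) return false;
    this.tokens -= 1;
    return true;
  }
}

const limiter = new TokenBucket(5, 1); // burst of 5 calls, then 1 per second
```

Inside a tool handler, call `limiter.tryRemove()` and return an `isError` result when it refuses. Caching covers the read side: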
```typescript
const cache = new Map<string, { data: unknown; timestamp: number }>();
const CACHE_TTL = 60_000; // 1 minute

async function cachedFetch(key: string, fetcher: () => Promise<unknown>) {
  const cached = cache.get(key);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const data = await fetcher();
  cache.set(key, { data, timestamp: Date.now() });
  return data;
}
```

The MCP ecosystem has grown rapidly: community servers now span databases, cloud providers, developer tools, and communication platforms.
The protocol's adoption was accelerated by Anthropic open-sourcing the specification and SDKs. When major AI labs agree on a standard, the ecosystem follows.
If you're building AI-powered tools, MCP should be your default integration approach. Start with a single stdio server that wraps one workflow you already rely on, then add servers as your needs grow.
MCP isn't revolutionary technology. It's a well-designed protocol that solves a real problem. The best infrastructure is boring infrastructure — and MCP is productively boring.
