
Understanding Model Context Protocol (MCP)

To Supercharge AI Development


Ever found yourself wishing your AI assistant understood your codebase like you do? Or that it could help navigate your local filesystem without constantly asking for context? The Model Context Protocol (MCP) is solving these exact pain points, and it's changing how we interact with AI assistants in development environments.

As a developer who's been exploring this space since MCP emerged, I've seen firsthand how it bridges the gap between AI models and our local development environments. Let's break down what MCP is, why it matters, and how you can leverage it in your own workflow.

What is MCP?

At its core, MCP (Model Context Protocol) is an open protocol, introduced by Anthropic in November 2024, that enables AI assistants to request and interact with contextual information from your local environment. Before MCP, AI assistants were isolated from your working environment: they couldn't see your files, understand your project structure, or access your terminal.

MCP changes that by creating standardized ways for AI models to request information from your local environment, execute commands, and maintain context throughout a conversation. Think of it as building a bridge between the AI assistant and your development environment.
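Under the hood, that bridge is built on JSON-RPC 2.0: every request for context or action is a plain JSON message. For illustration, a client asking a server to invoke a hypothetical read_file tool would send something like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "src/index.ts" }
  }
}
```

The server replies with a result message carrying the file contents, which the host then feeds back to the model as context.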

The MCP Architecture

MCP involves four main components working together:

MCP Architecture Diagram

1. MCP Client

What it is: A connector that lives inside the host application and maintains a dedicated, one-to-one connection to an MCP server.

Role: It acts as the intermediary between the host and a server. It forwards the host's requests and relays the server's responses, which is how the assistant can:

  • Access your filesystem

  • Run terminal commands

  • Provide project structure

  • Execute code

Example: The MCP client built into Cursor, Claude Desktop, or the MCP-capable extensions for VS Code and JetBrains IDEs.

2. MCP Host

What it is: The application you're actually interacting with: the AI chat app or AI-powered editor that embeds the large language model (LLM).

Role: The host coordinates its clients and requests information about your environment. It:

  • Generates AI responses

  • Requests specific contextual information when needed

  • Processes the returned context to provide better assistance

Example: Claude Desktop, Cursor, or any other application that wraps an LLM like Claude or GPT-4 and has been configured to use MCP.

3. Service Provider

What it is: The component that implements the various MCP services; in the official specification, this role is played by the MCP server.

Role: It bridges between the client and the services themselves:

  • Registers available services

  • Routes requests to the appropriate services

  • Handles permissions and security

  • Manages connection state

Example: A standalone MCP server process, like the weather server built later in this post, that the client launches and talks to over STDIO or HTTP.

4. MCP Services

What they are: Specific capabilities or functionalities offered through MCP.

Role: Each service provides a specific type of context or capability:

  • File access service for reading/writing files

  • Terminal service for executing commands

  • Language service for code understanding

  • and more

Example: A filesystem service that lets the AI read your project files or a terminal service that can run git commands.

Context Types in MCP

MCP supports several types of context that make AI assistants more useful:

1. Sampling Context

Sampling is the one context type that flows in the opposite direction: it lets an MCP server ask the host's LLM to generate a completion on the server's behalf, with the user kept in the loop to approve or reject each request. Instead of shipping its own model API key, a server can delegate reasoning back to the model you're already talking to, for example:

  • Summarizing a file the server just read

  • Classifying or transforming data before returning it

  • Driving multi-step, agentic workflows

Sampling context is extremely useful because it keeps control with the host: the client can review, modify, or rate-limit every request a server makes.
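In the MCP specification, these requests travel from the server back to the host via the sampling/createMessage method. An illustrative request might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": { "type": "text", "text": "Summarize the error log above in one sentence." }
      }
    ],
    "maxTokens": 200
  }
}
```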

2. Resource Context

This gives the AI access to resources in your environment, such as:

  • File system navigation

  • Project structure

  • Configuration files

  • Environment variables (when permitted)

Resource context helps the AI understand the architecture and organization of your project.
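In protocol terms, resources are addressed by URI and fetched with a resources/read request. A sketch, assuming a server that exposes project files under file:// URIs:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "file:///project/package.json" }
}
```

The result carries the resource contents (text or binary) plus a MIME type, which the host can hand to the model as context.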

3. Tool Context

This allows the AI to use tools in your environment:

  • Running terminal commands

  • Executing code snippets

  • Using debuggers

  • Accessing version control information

Tool context turns your AI assistant from a passive advisor into an active collaborator, capable of taking actions in your environment.

4. Prompt Context

This provides additional information about how you want the AI to behave:

  • Project-specific conventions

  • Response formatting preferences

  • Domain-specific knowledge

  • Working styles and preferences

Prompt context helps personalize the AI's responses to your needs and project requirements.
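On the wire, prompt context is exposed as named, parameterized templates that a client discovers with prompts/list and fills in with prompts/get. For a hypothetical code-review template:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": {
    "name": "code-review",
    "arguments": { "language": "typescript" }
  }
}
```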

Reflections in MCP

Reflections are a powerful pattern that MCP enables: the AI "reflects" on the information it already has and makes informed follow-up requests for additional context.

For example, if you ask the AI to debug an error in your code, it might:

  1. Reflect on what it knows about the error

  2. Request the file where the error occurs

  3. Request related files that might be contributing to the error

  4. Request terminal output for additional clues

Reflections make the AI conversation more dynamic and effective, as the AI can actively seek out the information it needs rather than asking you to provide everything upfront.

Building Your Own MCP Server: A Practical Example

Example MCP Server with Tools (STDIO + SSE)

Ref:

https://www.anthropic.com/news/model-context-protocol
https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript

Here’s a real-world example of an MCP server implemented using the official @modelcontextprotocol/sdk. This setup includes:

  • Two weather tools

  • STDIO transport for local CLI-based interaction

  • Notes on how to switch to SSE transport

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const NWS_API_BASE = "https://api.weather.gov";
const USER_AGENT = "weather-app/1.0";

// Helper function for making NWS API requests
async function makeNWSRequest<T>(url: string): Promise<T | null> {
  const headers = {
    "User-Agent": USER_AGENT,
    Accept: "application/geo+json",
  };

  try {
    const response = await fetch(url, { headers });
    if (!response.ok) throw new Error(`HTTP error! status: ${response.status}`);
    return (await response.json()) as T;
  } catch (error) {
    console.error("Error making NWS request:", error);
    return null;
  }
}

function formatAlert(feature: any): string {
  const props = feature.properties;
  return [
    `Event: ${props.event || "Unknown"}`,
    `Area: ${props.areaDesc || "Unknown"}`,
    `Severity: ${props.severity || "Unknown"}`,
    `Status: ${props.status || "Unknown"}`,
    `Headline: ${props.headline || "No headline"}`,
    "---",
  ].join("\n");
}

const server = new McpServer({
  name: "weather",
  version: "1.0.0",
});

// Tool 1: Get Alerts
server.tool(
  "get-alerts",
  "Get weather alerts for a state",
  {
    state: z.string().length(2).describe("Two-letter state code (e.g. CA, NY)"),
  },
  async ({ state }) => {
    const alertsUrl = `${NWS_API_BASE}/alerts?area=${state.toUpperCase()}`;
    // Give TypeScript an inline response shape so `data.features` type-checks
    const data = await makeNWSRequest<{ features: any[] }>(alertsUrl);
    if (!data || !data.features?.length) {
      return {
        content: [{ type: "text", text: `No active alerts for ${state}` }],
      };
    }
    const text = data.features.map(formatAlert).join("\n");
    return {
      content: [{ type: "text", text: `Active alerts for ${state}:\n\n${text}` }],
    };
  }
);

// Tool 2: Get Forecast
server.tool(
  "get-forecast",
  "Get weather forecast for a location",
  {
    latitude: z.number().min(-90).max(90),
    longitude: z.number().min(-180).max(180),
  },
  async ({ latitude, longitude }) => {
    const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
    // Inline response shapes keep the untyped helper type-safe at the call site
    const points = await makeNWSRequest<{ properties: { forecast: string } }>(pointsUrl);
    if (!points || !points.properties?.forecast) {
      return {
        content: [{ type: "text", text: "Forecast location not available." }],
      };
    }
    const forecast = await makeNWSRequest<{ properties: { periods: any[] } }>(
      points.properties.forecast
    );
    if (!forecast || !forecast.properties?.periods?.length) {
      return {
        content: [{ type: "text", text: "No forecast data available." }],
      };
    }
    const text = forecast.properties.periods
      .map((p: any) =>
        [
          `${p.name}:`,
          `Temperature: ${p.temperature}°${p.temperatureUnit}`,
          `Wind: ${p.windSpeed} ${p.windDirection}`,
          p.shortForecast,
          "---",
        ].join("\n")
      )
      .join("\n");

    return {
      content: [{ type: "text", text: `Forecast for ${latitude}, ${longitude}:\n\n${text}` }],
    };
  }
);

// STDIO Transport (for local CLI or Cursor IDE)
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr: stdout is reserved for JSON-RPC protocol messages
  console.error("✅ Weather MCP Server running using STDIO transport");
}

main().catch((err) => {
  console.error("Fatal error:", err);
  process.exit(1);
});

🔁 Switching to SSE Transport

To expose the same server over HTTP in a hosted environment, the SDK provides SSEServerTransport, which pairs with an HTTP framework such as Express: a GET endpoint opens the event stream, and a POST endpoint receives the client's JSON-RPC messages:

import express from "express";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();
let transport: SSEServerTransport | null = null;

// Clients open the SSE stream here...
app.get("/sse", async (_req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

// ...and POST their JSON-RPC messages here
app.post("/messages", async (req, res) => {
  if (transport) {
    await transport.handlePostMessage(req, res);
  }
});

app.listen(8080, () => {
  console.log("🌐 Weather MCP Server running over SSE on port 8080");
});

You can now deploy this server and have AI clients (like an LLM chat or dev tool) call your tools over the web.
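To wire the STDIO version into a client instead, you typically register it in the client's MCP configuration, for example in Claude Desktop's claude_desktop_config.json (the script path below is a placeholder for wherever your compiled server lives):

```json
{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["/absolute/path/to/weather/build/index.js"]
    }
  }
}
```

On restart, the client spawns the server as a child process and talks to it over stdin/stdout.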