What Is an MCP Server? How It Works + Examples

Learn what MCP servers are, how they expose tools/resources/prompts to AI applications, and see real-world examples of popular MCP servers.

20 min read
Updated February 25, 2026
By MCP Server Spot

What Is an MCP Server?

An MCP server is a lightweight program that wraps external tools, data sources, or services and exposes them to AI applications through the standardized Model Context Protocol. It acts as a bridge between what an AI model wants to do and the systems that can actually do it.

Consider a practical analogy: an MCP server is like a specialized translator at the United Nations. The AI model speaks one language (natural language reasoning), and the external system speaks another (REST APIs, SQL queries, CLI commands). The MCP server sits between them, translating requests and responses through a standardized protocol that both sides understand.

In concrete terms, an MCP server:

  • Declares its capabilities -- It tells the AI client exactly what tools, resources, and prompts it offers
  • Accepts standardized requests -- It receives JSON-RPC 2.0 messages from MCP clients
  • Executes actions -- It performs the actual work (API calls, database queries, file operations)
  • Returns structured results -- It sends back responses in a format the AI can understand

How MCP Servers Work: Step by Step

The Connection Flow

When an AI application like Claude Desktop starts up with MCP servers configured, here is what happens:

  1. Server Launch: The host application starts each configured MCP server (as a child process for stdio, or connects to a remote URL for HTTP)
  2. Handshake: The MCP client inside the host sends an initialize request to the server
  3. Capability Exchange: The server responds with its name, version, and what it supports (tools, resources, prompts)
  4. Discovery: The client calls tools/list, resources/list, and/or prompts/list to get the full catalog of capabilities
  5. Ready: The server's tools are now available to the AI model as part of its context

User types a question
        │
        ▼
┌─────────────────┐
│   AI Host App   │   Adds tool descriptions to model context
│  (Claude, etc.) │
└────────┬────────┘
         │  User message + tool descriptions → LLM
         ▼
┌─────────────────┐
│    AI Model     │   Decides: "I need to call search_code"
│   (Claude 4)    │
└────────┬────────┘
         │  Tool call request
         ▼
┌─────────────────┐
│   MCP Client    │   Sends JSON-RPC request to correct server
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   MCP Server    │   Executes the tool (e.g., searches GitHub)
│    (GitHub)     │
└────────┬────────┘
         │  Result
         ▼
┌─────────────────┐
│    AI Model     │   Incorporates result and generates response
└────────┬────────┘
         │
         ▼
    User sees answer

The Three Primitives

Every MCP server can expose up to three types of capabilities, often called the MCP building blocks:

Tools (Model-Controlled)

Tools are functions that the AI model can decide to call. They are the most common primitive and the primary way MCP servers add capabilities to AI systems.

// Example: A tool that searches a codebase
server.tool(
  "search_code",
  "Search for code patterns in the repository",
  {
    query: z.string().describe("The search pattern or regex"),
    fileType: z.string().optional().describe("Filter by file extension (e.g., '.ts', '.py')"),
    maxResults: z.number().default(10).describe("Maximum number of results to return"),
  },
  async ({ query, fileType, maxResults }) => {
    const results = await performCodeSearch(query, fileType, maxResults);
    return {
      content: [{
        type: "text",
        text: formatSearchResults(results),
      }],
    };
  }
);

Key characteristics of tools:

  • Model-controlled: The AI model decides when and how to invoke them
  • Schema-defined: Each tool has a JSON Schema for its parameters
  • Described: Each tool has a name and natural-language description the model uses to understand its purpose
  • Side-effect capable: Tools can modify state (create files, send messages, write to databases)

Resources (Application-Controlled)

Resources are data that the server makes available for the application to read. Unlike tools, resources are not invoked by the model -- the host application decides when to read them and include them in the model's context.

// Example: A resource exposing database schema
server.resource(
  "schema",
  "schema://main",
  { description: "Main database schema with all table definitions" },
  async (uri) => ({
    contents: [{
      uri: uri.href,
      mimeType: "text/plain",
      text: await getDatabaseSchema(),
    }],
  })
);

Key characteristics of resources:

  • Application-controlled: The host app decides when to read them (often at startup or on user request)
  • URI-addressed: Each resource has a unique URI for identification
  • Read-only: Resources provide data but do not modify anything
  • Contextual: They give the AI background information to make better decisions

Prompts (User-Controlled)

Prompts are reusable templates for common workflows. They help users invoke specific patterns without writing the full prompt each time.

// Example: A prompt template for code review
server.prompt(
  "code-review",
  "Review code changes with a structured checklist",
  {
    diff: z.string().describe("The diff or code changes to review"),
    language: z.string().default("typescript").describe("Programming language"),
  },
  async ({ diff, language }) => ({
    messages: [{
      role: "user",
      content: {
        type: "text",
        text: `Please review the following ${language} code changes using this checklist:
1. Correctness: Are there any bugs?
2. Security: Any vulnerabilities?
3. Performance: Any inefficiencies?
4. Readability: Is the code clear?
5. Testing: Are tests adequate?

\`\`\`diff
${diff}
\`\`\``,
      },
    }],
  })
);

Key characteristics of prompts:

  • User-controlled: Users explicitly select which prompt template to use
  • Parameterized: Templates accept arguments to customize the generated prompt
  • Reusable: They encode best practices into shareable templates

Anatomy of an MCP Server

Minimal Server Structure

Here is a complete, minimal MCP server in both TypeScript and Python:

TypeScript (using the official SDK):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// 1. Create the server
const server = new McpServer({
  name: "example-server",
  version: "1.0.0",
});

// 2. Register tools
server.tool(
  "get_weather",
  "Get the current weather for a city",
  {
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).default("celsius"),
  },
  async ({ city, units }) => {
    // In a real server, this would call a weather API
    const temp = units === "celsius" ? "22°C" : "72°F";
    return {
      content: [{
        type: "text",
        text: `Weather in ${city}: ${temp}, partly cloudy`,
      }],
    };
  }
);

// 3. Register resources
server.resource(
  "config",
  "config://app",
  { description: "Application configuration" },
  async (uri) => ({
    contents: [{
      uri: uri.href,
      mimeType: "application/json",
      text: JSON.stringify({ version: "1.0", environment: "production" }),
    }],
  })
);

// 4. Connect transport and start
const transport = new StdioServerTransport();
await server.connect(transport);

Python (using the FastMCP helper):

from mcp.server.fastmcp import FastMCP

# 1. Create the server
mcp = FastMCP("example-server")

# 2. Register tools
@mcp.tool()
def get_weather(city: str, units: str = "celsius") -> str:
    """Get the current weather for a city.

    Args:
        city: City name
        units: Temperature units (celsius or fahrenheit)
    """
    temp = "22°C" if units == "celsius" else "72°F"
    return f"Weather in {city}: {temp}, partly cloudy"

# 3. Register resources
@mcp.resource("config://app")
def get_config() -> str:
    """Application configuration."""
    return '{"version": "1.0", "environment": "production"}'

# 4. Start the server
if __name__ == "__main__":
    mcp.run()

Server Configuration in Host Applications

Once a server is built (or you are using a pre-built one), you configure it in your host application. Here is how it looks in Claude Desktop:

{
  "mcpServers": {
    "weather": {
      "command": "node",
      "args": ["path/to/weather-server/index.js"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}

Each entry specifies:

  • A name (used for identification in the UI and logs)
  • A command to run the server
  • Arguments passed to the command
  • Environment variables (optional, for API keys and configuration)
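As a quick sanity check, a config with this shape can be parsed in a few lines of Python. This is just an illustration -- the server names and commands below are the examples from this guide, not anything your setup is required to contain:

```python
import json

# Example config matching the structure shown above (values are illustrative)
raw = """
{
  "mcpServers": {
    "weather": {"command": "node", "args": ["path/to/weather-server/index.js"]},
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"}
    }
  }
}
"""

config = json.loads(raw)

def summarize_servers(config: dict) -> list[str]:
    """Return one human-readable line per configured server."""
    lines = []
    for name, entry in config.get("mcpServers", {}).items():
        cmd = " ".join([entry["command"], *entry.get("args", [])])
        env_note = " (+env)" if entry.get("env") else ""
        lines.append(f"{name}: {cmd}{env_note}")
    return lines

for line in summarize_servers(config):
    print(line)
```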

Popular MCP Server Examples

The MCP ecosystem has grown rapidly since the protocol's launch. Here are some of the most widely used servers, organized by category. You can explore the full directory at MCP Server Spot.

Development Tools

  • GitHub (@modelcontextprotocol/server-github) -- Full GitHub integration. Key tools: create issues, PRs, search repos, manage branches
  • GitLab -- GitLab API access. Key tools: merge requests, pipelines, issues
  • Filesystem (@modelcontextprotocol/server-filesystem) -- Local file access. Key tools: read, write, search, move files
  • Git (@modelcontextprotocol/server-git) -- Git operations. Key tools: status, diff, commit, log, branch

Databases

  • PostgreSQL (@modelcontextprotocol/server-postgres) -- PostgreSQL access. Key tools: run queries, inspect schema
  • SQLite (@modelcontextprotocol/server-sqlite) -- SQLite database access. Key tools: query, analyze, create tables
  • MongoDB -- MongoDB document access. Key tools: find, aggregate, insert, update
  • Redis -- Redis key-value store. Key tools: get, set, search, manage keys

Web & Browser

  • Puppeteer (@modelcontextprotocol/server-puppeteer) -- Browser automation. Key tools: navigate, screenshot, click, extract
  • Playwright -- Advanced browser automation. Key tools: multi-browser testing, scraping
  • Fetch (@modelcontextprotocol/server-fetch) -- Web content fetching. Key tools: fetch URLs, extract text/markdown
  • Brave Search (@modelcontextprotocol/server-brave-search) -- Web search. Key tools: search the web, get snippets

Productivity

  • Slack (@modelcontextprotocol/server-slack) -- Slack integration. Key tools: send messages, search channels, manage threads
  • Google Drive (@modelcontextprotocol/server-gdrive) -- Google Drive access. Key tools: search, read, create files
  • Notion -- Notion workspace access. Key tools: search pages, create/edit content, manage databases
  • Linear -- Linear project management. Key tools: create/update issues, manage projects

Cloud & Infrastructure

  • AWS KB Retrieval (@modelcontextprotocol/server-aws-kb-retrieval) -- AWS Bedrock knowledge bases. Key tools: query knowledge bases, retrieve documents
  • Cloudflare -- Cloudflare management. Key tools: Workers, DNS, analytics
  • Docker -- Container management. Key tools: list, start, stop, inspect containers
  • Kubernetes -- K8s cluster management. Key tools: pods, deployments, services, logs

How the AI Model Interacts with MCP Tools

Understanding the interaction pattern is crucial for both using and building MCP servers effectively.

Tool Discovery

When a client connects, it asks the server what is available:

// Client request
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// Server response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_code",
        "description": "Search for code patterns in the repository",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": {"type": "string", "description": "The search pattern"},
            "fileType": {"type": "string", "description": "File extension filter"}
          },
          "required": ["query"]
        }
      },
      {
        "name": "read_file",
        "description": "Read the contents of a file",
        "inputSchema": {
          "type": "object",
          "properties": {
            "path": {"type": "string", "description": "Path to the file"}
          },
          "required": ["path"]
        }
      }
    ]
  }
}

These tool descriptions are added to the AI model's context, giving it a "menu" of available actions.
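To make that "menu" concrete, here is a rough sketch of how a host might flatten a tools/list result into text for the model's context. The rendering format is purely illustrative -- real hosts each have their own internal representation:

```python
def render_tool_menu(tools: list[dict]) -> str:
    """Flatten tool metadata into a plain-text menu for the model context."""
    lines = []
    for tool in tools:
        schema = tool.get("inputSchema", {})
        params = schema.get("properties", {})
        required = set(schema.get("required", []))
        # Mark optional parameters with a trailing "?"
        args = ", ".join(
            f"{name}: {spec.get('type', 'any')}" + ("" if name in required else "?")
            for name, spec in params.items()
        )
        lines.append(f"- {tool['name']}({args}): {tool['description']}")
    return "\n".join(lines)

# Using the search_code tool from the discovery example above
tools = [{
    "name": "search_code",
    "description": "Search for code patterns in the repository",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "fileType": {"type": "string"},
        },
        "required": ["query"],
    },
}]

print(render_tool_menu(tools))
# - search_code(query: string, fileType: string?): Search for code patterns in the repository
```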

Tool Invocation

When the model decides to use a tool, the flow works like this:

  1. The AI model generates a tool call in its response
  2. The host application intercepts this and routes it through the MCP client
  3. The client sends a tools/call JSON-RPC request to the appropriate server
  4. The server executes the tool and returns the result
  5. The result is fed back to the model as context for its next response

// Client sends tool call
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_code",
    "arguments": {
      "query": "handleAuth",
      "fileType": ".ts"
    }
  }
}

// Server returns result
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 5 matches for 'handleAuth' in .ts files:\n\n1. src/auth/handler.ts:24 - export function handleAuth(req: Request)\n2. src/auth/handler.ts:45 - async function handleAuthCallback(code: string)\n3. src/middleware/auth.ts:12 - import { handleAuth } from '../auth/handler'\n4. src/routes/login.ts:8 - import { handleAuth } from '../auth/handler'\n5. tests/auth.test.ts:15 - describe('handleAuth', () => {"
      }
    ]
  }
}

Multi-Tool Orchestration

In practice, AI models often chain multiple tool calls to accomplish a task. For example:

User: "Find and fix the broken test in the auth module"

Model reasoning:
1. First, I'll search for test files in the auth module
   → calls search_code("test", fileType=".test.ts", path="src/auth")
2. I see there's a failing test. Let me read the test file
   → calls read_file("src/auth/handler.test.ts")
3. And the source file it tests
   → calls read_file("src/auth/handler.ts")
4. I can see the bug. Let me fix it
   → calls write_file("src/auth/handler.ts", updatedContent)
5. Now let me verify the fix by running the test
   → calls run_command("npm test -- --grep 'handleAuth'")

This multi-step reasoning is what makes MCP so powerful. The AI is not just calling one API -- it is orchestrating a workflow across multiple tools, adapting its strategy based on intermediate results.
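The loop driving that workflow can be sketched in a few lines. Everything below is hypothetical -- `fake_model` and the tool registry stand in for a real LLM and real MCP tools -- but the shape (call a tool, feed the result back, repeat until the model answers) is the core pattern:

```python
def run_agent_loop(model, tools: dict, user_message: str, max_steps: int = 10) -> str:
    """Drive the model until it produces a final answer instead of a tool call."""
    history = [("user", user_message)]
    for _ in range(max_steps):
        action = model(history)  # returns ("call", name, args) or ("answer", text)
        if action[0] == "answer":
            return action[1]
        _, name, args = action
        result = tools[name](**args)            # execute the tool
        history.append(("tool", name, result))  # feed the result back as context
    raise RuntimeError("agent did not finish within max_steps")

# Hypothetical stand-ins for a real model and a real MCP tool
def fake_model(history):
    if len(history) == 1:
        return ("call", "search_code", {"query": "handleAuth"})
    return ("answer", "Found handleAuth in src/auth/handler.ts")

tools = {"search_code": lambda query: f"1 match for '{query}'"}

print(run_agent_loop(fake_model, tools, "Where is handleAuth defined?"))
# Found handleAuth in src/auth/handler.ts
```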


Building Your Own MCP Server

When to Build a Custom Server

You should build a custom MCP server when:

  • Your tool or service does not have an existing MCP server -- Check the MCP Server Directory first
  • You need custom logic -- Your use case requires specific business rules or data transformations
  • You want to wrap internal APIs -- Your organization has proprietary services that need AI access
  • You need combined capabilities -- You want a single server that aggregates multiple related tools

Server Development Best Practices

1. Write clear tool descriptions. The AI model decides which tool to use based on descriptions alone. Be specific and include edge cases:

# Bad: vague description
@mcp.tool()
def search(q: str) -> str:
    """Search for things."""
    ...

# Good: specific description with usage guidance
@mcp.tool()
def search_issues(
    query: str,
    state: str = "open",
    labels: list[str] | None = None
) -> str:
    """Search GitHub issues in the current repository.

    Use this tool when the user wants to find issues by keyword,
    filter by state (open/closed), or search by labels.
    Returns issue number, title, state, and assignee for each match.

    Args:
        query: Search keywords to match against issue titles and bodies
        state: Filter by issue state - 'open', 'closed', or 'all'
        labels: Optional list of label names to filter by
    """
    ...

2. Return structured, useful results. Do not dump raw API responses. Format the output so the AI model can easily parse and reason about it:

# Bad: raw API dump
return json.dumps(api_response)

# Good: structured, readable output
return f"""Found {len(results)} issues matching '{query}':

{chr(10).join(f"#{issue['number']} - {issue['title']} ({issue['state']})"
              for issue in results)}

Total open: {sum(1 for i in results if i['state'] == 'open')}
Total closed: {sum(1 for i in results if i['state'] == 'closed')}"""

3. Handle errors gracefully. Return clear error messages that help the AI model recover:

server.tool("query_database", "Run a SQL query", { sql: z.string() },
  async ({ sql }) => {
    try {
      const results = await db.query(sql);
      return { content: [{ type: "text", text: formatResults(results) }] };
    } catch (error) {
      return {
        content: [{
          type: "text",
          text: `Database query failed: ${error.message}\n\nSuggestion: Check that table and column names are correct. Use the 'list_tables' tool to see available tables.`,
        }],
        isError: true,
      };
    }
  }
);

4. Implement proper security. Validate all inputs, limit permissions, and never expose sensitive data:

import os
from pathlib import Path

ALLOWED_DIR = Path(os.environ.get("ALLOWED_DIR", "/home/user/projects")).resolve()

@mcp.tool()
def read_file(path: str) -> str:
    """Read a file from the allowed project directory."""
    resolved = (ALLOWED_DIR / path).resolve()

    # Security: prevent path traversal attacks. A plain startswith() check would
    # wrongly allow sibling paths like /home/user/projects2, so use
    # Path.is_relative_to (Python 3.9+) instead.
    if not resolved.is_relative_to(ALLOWED_DIR):
        raise ValueError(f"Access denied: path must be within {ALLOWED_DIR}")

    if not resolved.exists():
        raise FileNotFoundError(f"File not found: {path}")

    return resolved.read_text()

For a complete guide to MCP security, see MCP Security Model.


Local vs Remote MCP Servers

MCP servers can run in two fundamentally different modes, each with distinct trade-offs. For a deep dive, see Local vs Remote MCP Servers.

Local Servers (stdio Transport)

  • Run as child processes on the user's machine
  • Communicate through stdin/stdout
  • No network configuration required
  • Inherit the user's filesystem permissions
  • Ideal for: file access, local tools, development workflows

Remote Servers (HTTP/SSE Transport)

  • Run on a remote machine or cloud service
  • Communicate over HTTP with Server-Sent Events
  • Typically require authentication (OAuth 2.1)
  • Can be shared across teams and organizations
  • Ideal for: SaaS integrations, shared databases, enterprise deployments

At a glance:

  • Setup -- Local (stdio): simple (command + args). Remote (HTTP/SSE): requires URL and auth config
  • Latency -- Local: very low (same machine). Remote: network-dependent
  • Security -- Local: user's local permissions. Remote: OAuth 2.1, TLS
  • Sharing -- Local: single user. Remote: multi-user, multi-org
  • State -- Local: process lifetime. Remote: server lifetime
  • Scaling -- Local: single machine. Remote: cloud-scalable

The MCP Server Lifecycle

Initialization

Every MCP server goes through a startup sequence:

  1. The host application launches the server process (or connects to a remote URL)
  2. The client sends initialize with its protocol version and capabilities
  3. The server responds with its protocol version, name, and supported capabilities
  4. The client sends initialized to confirm the handshake is complete
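Sketched as JSON-RPC messages, the handshake looks roughly like this. Field names follow the MCP spec; the protocol version string is a placeholder, since the actual value is a spec revision date that changes over time:

```python
import json

# Step 2: client -> server (protocolVersion value is a placeholder)
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "<spec-revision-date>",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

# Step 3: server -> client, declaring what it supports
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "<spec-revision-date>",
        "capabilities": {"tools": {}, "resources": {}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

# Step 4: client -> server, confirming the handshake (a notification, so no id)
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}

print(json.dumps(initialize_request, indent=2))
```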

Normal Operation

During normal operation, the server handles requests as they arrive:

  • tools/list -- Return the catalog of available tools
  • tools/call -- Execute a specific tool with provided arguments
  • resources/list -- Return available resources
  • resources/read -- Read a specific resource
  • prompts/list -- Return available prompt templates
  • prompts/get -- Get a specific prompt with arguments filled in
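Internally, a server is essentially a dispatcher over these method names. Here is a toy sketch of that routing (real SDKs handle this for you, along with schema validation, error codes, and notifications):

```python
# A tiny in-memory tool registry (handlers are illustrative)
TOOLS = {
    "get_weather": {
        "description": "Get the current weather for a city",
        "handler": lambda city: f"Weather in {city}: 22°C, partly cloudy",
    },
}

def handle_request(request: dict) -> dict:
    """Route a JSON-RPC request to the matching handler."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [
            {"name": name, "description": spec["description"]}
            for name, spec in TOOLS.items()
        ]}
    elif method == "tools/call":
        params = request["params"]
        text = TOOLS[params["name"]]["handler"](**params["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": f"Method not found: {method}"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

resp = handle_request({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
})
print(resp["result"]["content"][0]["text"])
# Weather in Oslo: 22°C, partly cloudy
```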

Notifications

MCP supports asynchronous notifications in both directions:

  • Server to client: Tool list changed, resource updated, log messages
  • Client to server: Roots changed (working directories updated)

Shutdown

When the host application closes or the user disconnects, the server receives a shutdown signal and can clean up resources (close database connections, save state, etc.).


Common Patterns and Architectures

Single-Purpose Servers

Most MCP servers follow the single-purpose pattern -- one server wraps one external system:

  • GitHub server wraps the GitHub API
  • PostgreSQL server wraps a PostgreSQL database
  • Filesystem server wraps local file operations

This pattern keeps servers simple, focused, and composable.

Aggregator Servers

Some servers aggregate multiple related tools into a single interface:

  • A "dev tools" server that combines Git, Docker, and test running
  • A "productivity" server that combines email, calendar, and task management
  • A "data analysis" server that combines SQL, pandas, and chart generation

Gateway Servers

In enterprise settings, a gateway server can act as a proxy, routing requests to internal services while handling authentication, rate limiting, and audit logging centrally:

MCP Client → Gateway MCP Server → Internal Service A
                                → Internal Service B
                                → Internal Service C
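One way to sketch the routing core of such a gateway: tools are namespaced by service, and the gateway strips the prefix before forwarding. The service names and backends here are hypothetical:

```python
# Hypothetical internal backends, keyed by namespace prefix
BACKENDS = {
    "hr": lambda tool, args: f"[hr-service] {tool}({args})",
    "billing": lambda tool, args: f"[billing-service] {tool}({args})",
}

def route_tool_call(qualified_name: str, args: dict) -> str:
    """Split 'service.tool' and forward to the matching internal backend."""
    service, _, tool = qualified_name.partition(".")
    if service not in BACKENDS or not tool:
        raise ValueError(f"Unknown service in tool name: {qualified_name!r}")
    # A real gateway would also enforce auth, rate limits, and audit logging here
    return BACKENDS[service](tool, args)

print(route_tool_call("hr.lookup_employee", {"id": 42}))
# [hr-service] lookup_employee({'id': 42})
```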

For more on advanced architectures, see Composability in MCP.


Finding and Installing MCP Servers

Discovery

The best way to find MCP servers for your needs:

  1. MCP Server Spot directory -- Our curated, searchable catalog of MCP servers organized by category
  2. GitHub -- Search for repositories tagged with "mcp-server"
  3. npm -- Search for packages in the @modelcontextprotocol scope
  4. PyPI -- Search for Python MCP server packages
  5. Official MCP repository -- github.com/modelcontextprotocol/servers for reference implementations

Installation

Most MCP servers can be installed and configured in minutes:

For npm-based servers:

# No global install needed -- npx handles it
# Just add to your Claude Desktop config:
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token"
      }
    }
  }
}

For Python-based servers:

# Install with pip or uvx
pip install mcp-server-fetch

# Or use uvx in your config for automatic management:
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}

Summary

MCP servers are the building blocks of AI-tool integration. They wrap external systems in a standardized protocol, giving AI models the ability to search, create, modify, and orchestrate across any connected service. Whether you are using pre-built servers from the MCP directory or building custom ones for your organization, MCP servers are how AI applications connect to the real world.


Frequently Asked Questions

What is an MCP server?

An MCP server is a lightweight program that exposes tools, data resources, and prompt templates to AI applications through the Model Context Protocol. It acts as a standardized wrapper around any external system — APIs, databases, file systems, or services — making them accessible to AI models.

How is an MCP server different from a regular API server?

Unlike regular API servers, MCP servers speak a standardized protocol (JSON-RPC 2.0) designed specifically for AI interaction. They include dynamic capability discovery (the AI can ask what tools are available), structured schemas for every tool, and bidirectional communication. A traditional API requires the developer to write custom integration code for each AI app.

What are the three primitives an MCP server can expose?

MCP servers can expose three types of primitives: Tools (functions the AI model can call), Resources (data the application can read), and Prompts (reusable templates for common workflows). Not every server needs to implement all three.

Can one AI app connect to multiple MCP servers at once?

Yes. An AI host application like Claude Desktop or Cursor can connect to many MCP servers simultaneously. Each connection is managed by a separate MCP client instance within the host. This allows a single AI assistant to access GitHub, databases, filesystems, and other tools all at once.

Do I need to write code to use an MCP server?

No. Many MCP servers are available as pre-built packages on npm or PyPI. You can install and configure them in applications like Claude Desktop, Cursor, or VS Code by adding a few lines to a JSON configuration file. Building a new MCP server requires code, but using existing ones typically does not.

How does an AI model decide which MCP tool to use?

When an MCP client connects to servers, it collects the list of available tools with their names, descriptions, and parameter schemas. These are included in the AI model's context. The model then uses its reasoning abilities to decide which tool to call based on the user's request, similar to how a person decides which app to open.

Are MCP servers secure?

MCP servers can be secure when properly implemented. Local servers (stdio transport) run on the user's machine with the user's permissions. Remote servers support OAuth 2.1 authentication, TLS encryption, and permission-based consent flows. However, users should only install MCP servers from trusted sources and review the permissions they grant.

What programming languages can I use to build an MCP server?

Official SDKs are available for Python, TypeScript/JavaScript, Java, Kotlin, C#, and Swift. Community SDKs exist for Go, Rust, Ruby, and other languages. The protocol itself is language-agnostic — any language that can handle JSON-RPC 2.0 over stdio or HTTP can implement an MCP server.
