
Testing & Debugging MCP Servers (Inspector Tools Guide)

Master MCP server testing with the MCP Inspector, debugging techniques, logging best practices, and automated testing strategies.

22 min read
Updated February 25, 2026
By MCP Server Spot

Testing and debugging MCP servers requires different strategies than typical application testing. MCP servers communicate via JSON-RPC over stdio or SSE transports, which means standard debugging approaches like print statements and interactive debuggers need adaptation. The good news: the MCP ecosystem provides excellent tooling, starting with the MCP Inspector.

This guide covers the complete testing and debugging workflow: interactive testing with the Inspector, logging strategies that work with stdio transport, automated unit and integration tests, debugging common errors, and monitoring production servers.

The MCP Inspector: Your Primary Debug Tool

The MCP Inspector is a web-based tool that connects to your MCP server and provides an interactive UI for testing every capability. It is the single most important tool in your MCP development workflow.

Launching the Inspector

For Python servers (using the MCP CLI):

mcp dev server.py

For TypeScript servers (using npx):

npx @modelcontextprotocol/inspector node dist/index.js

Or with tsx for TypeScript source files:

npx @modelcontextprotocol/inspector tsx src/index.ts

These commands start your MCP server and open a web UI at http://localhost:5173 (the exact port can vary between Inspector versions).

Inspector Interface Overview

The Inspector provides several key panels:

  • Tools -- List, inspect, and call registered tools with parameter inputs
  • Resources -- Browse available resources and read their contents
  • Prompts -- View and test prompt templates with arguments
  • Messages -- View raw JSON-RPC messages (request/response pairs)
  • Notifications -- See server-sent notifications (resource changes, logs, progress)

Testing Tools in the Inspector

  1. Navigate to the Tools panel
  2. Select a tool from the list
  3. Fill in the input parameters using the auto-generated form
  4. Click Run to execute the tool call
  5. Examine the response in the output panel

The Inspector shows both the formatted response and the raw JSON-RPC message, making it easy to see exactly what your server returns.

Testing Edge Cases

Use the Inspector to systematically test edge cases:

  • Missing required parameters -- What happens when you leave a required field empty?
  • Invalid types -- What if you send a string where a number is expected?
  • Boundary values -- Zero, negative numbers, extremely long strings
  • Empty results -- Search queries that match nothing
  • Error conditions -- Invalid IDs, unavailable services

The Inspector lets you send any JSON as tool arguments, so you can test malformed inputs that an AI model would not normally send.
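Most of these edge cases come down to input validation inside the tool handler. A minimal sketch of guarding against the cases above (the validate_city helper and its limits are hypothetical, not part of any SDK):

```python
def validate_city(city: object) -> str:
    # Hypothetical guard for a city-name tool argument, covering the
    # edge cases above: wrong type, empty value, and boundary lengths
    if not isinstance(city, str):
        raise ValueError(f"city must be a string, got {type(city).__name__}")
    city = city.strip()
    if not city:
        raise ValueError("city must not be empty")
    if len(city) > 200:
        raise ValueError("city is unreasonably long (max 200 characters)")
    return city
```

In the Inspector you can feed each of these bad inputs to a tool and confirm it returns a clear error message rather than crashing the server.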

Inspecting Raw Protocol Messages

The Messages panel is invaluable for debugging protocol issues. It shows:

// Request from Inspector (client) to your server
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": {
      "city": "San Francisco"
    }
  }
}

// Response from your server
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Temperature: 62°F, Condition: Partly cloudy"
      }
    ]
  }
}

If your server returns malformed responses, the Messages panel shows exactly what went wrong.
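The shapes above are plain JSON-RPC 2.0, so you can build and sanity-check them with nothing but the standard library. A small sketch that assembles a tools/call request and verifies that a response answers it:

```python
import json

def make_tool_call(request_id: int, name: str, arguments: dict) -> str:
    # Serialize a JSON-RPC 2.0 tools/call request, as the Inspector sends it
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

def matches_request(request_json: str, response_json: str) -> bool:
    # A response answers a request when the ids match and jsonrpc is "2.0"
    req, resp = json.loads(request_json), json.loads(response_json)
    return resp.get("jsonrpc") == "2.0" and resp.get("id") == req.get("id")
```

Checks like matches_request are handy when you capture raw traffic from the Messages panel and want to pair requests with responses programmatically.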

Logging Strategies for MCP Servers

Standard logging is complicated by the stdio transport: stdout is reserved for JSON-RPC messages. Any log output to stdout corrupts the protocol stream and breaks the connection.

The Golden Rule: Use stderr

Python:

import logging
import sys

# Configure logging to stderr
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    stream=sys.stderr,
)
logger = logging.getLogger("my-mcp-server")

@mcp.tool()
async def my_tool(query: str) -> str:
    logger.info(f"Tool called with query: {query}")
    logger.debug(f"Processing query of length {len(query)}")
    # ...
    logger.info("Tool completed successfully")
    return result

TypeScript:

// console.error goes to stderr (safe for MCP)
// console.log goes to stdout (NOT safe for MCP)

function log(level: string, message: string, data?: unknown) {
  const timestamp = new Date().toISOString();
  const entry = { timestamp, level, message, ...(data !== undefined && { data }) };
  console.error(JSON.stringify(entry));
}

// Usage
log("info", "Tool called", { tool: "search", query });
log("error", "Database connection failed", { error: err.message });

MCP Protocol-Level Logging

The MCP protocol supports server-to-client log notifications that appear in the client's UI:

# Python (FastMCP): ask for a Context parameter and send log notifications
from mcp.server.fastmcp import Context

@mcp.tool()
async def complex_operation(data: str, ctx: Context) -> str:
    # ctx.info() sends a notifications/message log entry to the client
    await ctx.info("Starting complex operation...")

    result = await process_data(data)

    await ctx.info(f"Operation complete. Processed {len(result)} items.")

    return result

// TypeScript: Send log notifications
server.sendLoggingMessage({
  level: "info",
  data: "Processing started",
});

These log messages appear in both the MCP Inspector and supported clients like Claude Desktop (in developer mode).

Structured Logging for Production

For production servers, use structured logging that can be parsed by log aggregation tools:

import json
import sys
from datetime import datetime, timezone

class MCPLogger:
    def __init__(self, name: str):
        self.name = name

    def _log(self, level: str, message: str, **kwargs):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": level,
            "server": self.name,
            "message": message,
            **kwargs,
        }
        print(json.dumps(entry), file=sys.stderr)

    def info(self, message: str, **kwargs):
        self._log("INFO", message, **kwargs)

    def error(self, message: str, **kwargs):
        self._log("ERROR", message, **kwargs)

    def debug(self, message: str, **kwargs):
        self._log("DEBUG", message, **kwargs)

logger = MCPLogger("weather-server")
logger.info("Server starting", version="1.0.0")
logger.error("API request failed", url=url, status_code=500)

Debugging Common Errors

Error: Server Not Starting

Symptom: Claude Desktop shows "Server failed to start" or the Inspector cannot connect.

Diagnosis:

# Test the server directly
python server.py
# or
node dist/index.js

If you see errors, fix them before trying the Inspector or Claude Desktop.

Common causes:

  • Syntax errors in your server file
  • Missing dependencies (run uv sync or npm install)
  • Wrong Python/Node version
  • Port conflict (for SSE servers)

Error: Tools Not Appearing

Symptom: Server starts but tools do not show up in the client.

Diagnosis checklist:

  1. Capabilities declared? Make sure your server declares tool support:
// TypeScript: capabilities must include tools
const server = new Server(info, {
  capabilities: {
    tools: {},  // This must be present
  },
});
# Python FastMCP: tools are enabled automatically when you use @mcp.tool()
# No manual configuration needed
  2. Handler registered? Ensure you have both ListToolsRequestSchema and CallToolRequestSchema handlers.

  3. Server restarted? Claude Desktop caches server capabilities. Fully restart (quit and reopen) after changes.

Error: Tool Returns Empty or Null

Symptom: Tool executes but the AI receives no content.

Common cause: Your tool function returns None instead of a string.

# Bug: function has no return statement on one code path
@mcp.tool()
async def fetch_data(id: str) -> str:
    data = await api.get(id)
    if data:
        return format_data(data)
    # BUG: no return here — Python returns None

# Fix: always return something
@mcp.tool()
async def fetch_data(id: str) -> str:
    data = await api.get(id)
    if data:
        return format_data(data)
    return f"No data found for ID '{id}'."

Error: Connection Drops Mid-Conversation

Symptom: Server works initially but disconnects during use.

Common causes:

  1. Unhandled exception crashing the server process:
# Wrap tool handlers in try/except
@mcp.tool()
async def risky_tool(input: str) -> str:
    try:
        return await do_something_risky(input)
    except Exception as e:
        logger.error(f"Tool error: {e}")
        return f"An error occurred: {str(e)}"
  2. Memory leak causing the process to be killed:
# Bad: accumulating data without limits
all_results = []  # grows forever

# Good: bounded data structures
from collections import deque
recent_results = deque(maxlen=1000)
  3. Stdout pollution from a dependency:
# Some libraries print to stdout. Redirect before importing.
import sys
import io

# Capture any stdout from libraries
_original_stdout = sys.stdout
sys.stdout = sys.stderr  # Redirect stdout to stderr

# Now import libraries that might print
import some_chatty_library

# Restore if needed for non-MCP code
# sys.stdout = _original_stdout
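If only a few calls are noisy, a narrower alternative to redirecting stdout globally is to divert prints around individual calls (a sketch using only the standard library):

```python
import contextlib
import io
import sys

def call_quietly(fn, *args, **kwargs):
    # Run fn while capturing anything it prints to stdout, then forward
    # the captured text to stderr so the JSON-RPC stream stays clean
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        result = fn(*args, **kwargs)
    captured = buffer.getvalue()
    if captured:
        sys.stderr.write(captured)
    return result
```

This keeps the noisy library usable while guaranteeing nothing it prints reaches the protocol stream.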

Error: "Could not connect to MCP server" in Claude Desktop

Diagnosis steps:

  1. Check the config path:

    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
    • Windows: %APPDATA%\Claude\claude_desktop_config.json
  2. Validate JSON syntax:

# Check for JSON syntax errors
python -m json.tool ~/Library/Application\ Support/Claude/claude_desktop_config.json
  3. Verify the command works manually:
# Try running exactly what Claude Desktop would run
uv --directory /path/to/server run server.py
  4. Check Claude Desktop logs:
    • macOS: ~/Library/Logs/Claude/
    • Look for the most recent log file with MCP-related error messages

Automated Testing Strategies

Unit Testing Tool Logic

Extract business logic from tool handlers and test it independently:

# server.py
from weather_service import format_alerts, parse_forecast

@mcp.tool()
async def get_alerts(state: str) -> str:
    data = await fetch_alerts(state)
    return format_alerts(data)

# test_weather_service.py
import pytest
from weather_service import format_alerts

def test_format_alerts_empty():
    result = format_alerts([])
    assert result == "No active weather alerts for this area."

def test_format_alerts_single():
    alerts = [{
        "properties": {
            "event": "Winter Storm Warning",
            "areaDesc": "Northern California",
            "severity": "Severe",
            "description": "Heavy snow expected",
            "instruction": "Stay indoors",
        }
    }]
    result = format_alerts(alerts)
    assert "Winter Storm Warning" in result
    assert "Severe" in result

def test_format_alerts_missing_fields():
    alerts = [{"properties": {}}]
    result = format_alerts(alerts)
    assert "Unknown" in result

Integration Testing with In-Memory Transport

Test the full MCP protocol stack without stdio:

Python:

import pytest
from mcp.shared.memory import create_connected_server_and_client_session

from server import mcp as my_server

@pytest.mark.asyncio
async def test_get_alerts_tool():
    # Wire an in-memory client session directly to the FastMCP server's
    # underlying low-level server -- no stdio subprocess required
    async with create_connected_server_and_client_session(
        my_server._mcp_server
    ) as session:
        # List tools
        tools = await session.list_tools()
        tool_names = [t.name for t in tools.tools]
        assert "get_alerts" in tool_names

        # Call a tool
        result = await session.call_tool("get_alerts", {"state": "CA"})
        assert result.content
        assert len(result.content) > 0

TypeScript:

import { describe, it, expect } from "vitest";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { InMemoryTransport } from "@modelcontextprotocol/sdk/inMemory.js";
import { createServer } from "./server.js";

describe("Notes MCP Server", () => {
  it("should list tools", async () => {
    const server = createServer();
    const client = new Client({ name: "test-client", version: "1.0" });

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      server.connect(serverTransport),
      client.connect(clientTransport),
    ]);

    const tools = await client.listTools();
    expect(tools.tools.map((t) => t.name)).toContain("add_note");
    expect(tools.tools.map((t) => t.name)).toContain("search_notes");

    await client.close();
    await server.close();
  });

  it("should add and search notes", async () => {
    const server = createServer();
    const client = new Client({ name: "test-client", version: "1.0" });

    const [clientTransport, serverTransport] =
      InMemoryTransport.createLinkedPair();

    await Promise.all([
      server.connect(serverTransport),
      client.connect(clientTransport),
    ]);

    // Add a note
    const addResult = await client.callTool({
      name: "add_note",
      arguments: { title: "Test Note", content: "Hello world" },
    });
    expect(addResult.content[0].text).toContain("created successfully");

    // Search for it
    const searchResult = await client.callTool({
      name: "search_notes",
      arguments: { query: "Hello" },
    });
    expect(searchResult.content[0].text).toContain("Test Note");

    await client.close();
    await server.close();
  });
});

Snapshot Testing for Tool Responses

Snapshot tests catch unexpected changes in tool output:

import { describe, it, expect } from "vitest";

describe("Tool Response Snapshots", () => {
  it("should format alerts consistently", () => {
    const alerts = [
      {
        properties: {
          event: "Heat Advisory",
          areaDesc: "Phoenix Metro",
          severity: "Moderate",
          description: "Temperatures expected to reach 115°F",
          instruction: "Drink plenty of water",
        },
      },
    ];

    const result = formatAlerts(alerts);
    expect(result).toMatchSnapshot();
  });
});
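The same idea works in pytest without extra plugins: store a "golden" copy of the output and fail when it drifts. A sketch (format_alerts stands in for whatever formatter your server uses):

```python
from pathlib import Path

def assert_matches_golden(actual: str, golden_path: Path) -> None:
    # Create the golden file on the first run; compare against it afterwards
    if not golden_path.exists():
        golden_path.write_text(actual)
        return
    expected = golden_path.read_text()
    assert actual == expected, (
        f"output drifted from {golden_path}; "
        "delete the file to regenerate it if the change is intentional"
    )
```

Commit the golden files alongside the tests so reviewers can see exactly how tool output changed.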

Mocking External Services

Use mocks to test tools without hitting real APIs:

Python with pytest and respx:

import respx
import httpx
import pytest

@respx.mock
@pytest.mark.asyncio
async def test_get_forecast():
    # Mock the NWS points endpoint
    respx.get("https://api.weather.gov/points/37.7749,-122.4194").mock(
        return_value=httpx.Response(200, json={
            "properties": {
                "forecast": "https://api.weather.gov/gridpoints/MTR/85,105/forecast"
            }
        })
    )

    # Mock the forecast endpoint
    respx.get(
        "https://api.weather.gov/gridpoints/MTR/85,105/forecast"
    ).mock(
        return_value=httpx.Response(200, json={
            "properties": {
                "periods": [{
                    "name": "Today",
                    "temperature": 65,
                    "temperatureUnit": "F",
                    "windSpeed": "10 mph",
                    "windDirection": "W",
                    "detailedForecast": "Sunny with mild temperatures.",
                }]
            }
        })
    )

    from server import get_forecast
    result = await get_forecast(37.7749, -122.4194)
    assert "65°F" in result
    assert "Sunny" in result

TypeScript with nock:

import nock from "nock";

it("should fetch weather data", async () => {
  nock("https://api.weather.gov")
    .get("/points/37.7749,-122.4194")
    .reply(200, {
      properties: {
        forecast: "https://api.weather.gov/gridpoints/MTR/85,105/forecast",
      },
    });

  nock("https://api.weather.gov")
    .get("/gridpoints/MTR/85,105/forecast")
    .reply(200, {
      properties: {
        periods: [
          {
            name: "Today",
            temperature: 65,
            temperatureUnit: "F",
            windSpeed: "10 mph",
            windDirection: "W",
            detailedForecast: "Sunny.",
          },
        ],
      },
    });

  const result = await client.callTool({
    name: "get_forecast",
    arguments: { latitude: 37.7749, longitude: -122.4194 },
  });

  expect(result.content[0].text).toContain("65°F");
});

Debugging SSE Transport Servers

Remote MCP servers using SSE (Server-Sent Events) transport have additional debugging considerations.

Testing SSE Connections

# Test the SSE endpoint directly with curl
curl -N http://localhost:3001/sse

# You should see the SSE stream with an endpoint message:
# event: endpoint
# data: /messages?sessionId=abc123
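The stream is plain text in event:/data: framing, so a few lines of Python can check that the first event is the expected endpoint message. A sketch of the framing only; fetching the stream is left to your HTTP client:

```python
def parse_sse_event(raw: str) -> dict:
    # Parse one SSE event block ("event:" / "data:" lines) into a dict
    event = {}
    for line in raw.strip().splitlines():
        if ":" in line:
            field, _, value = line.partition(":")
            event[field.strip()] = value.strip()
    return event
```

Feed it the first block from the curl output above and assert that event is "endpoint" and data contains a sessionId.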

Common SSE Issues

  • CORS errors -- Symptom: browser client cannot connect. Fix: add CORS headers to your SSE endpoint.
  • Proxy buffering -- Symptom: messages arrive in batches. Fix: disable proxy buffering (X-Accel-Buffering: no).
  • Connection timeout -- Symptom: connection drops after 60 seconds. Fix: send periodic heartbeat events.
  • SSL mismatch -- Symptom: connection refused. Fix: ensure HTTPS is configured correctly.

SSE Heartbeat Implementation

// Send periodic heartbeats to keep the connection alive
setInterval(() => {
  res.write(": heartbeat\n\n");
}, 30000); // Every 30 seconds

CI/CD Integration

GitHub Actions Example

name: Test MCP Server
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install uv
        uses: astral-sh/setup-uv@v3

      - name: Install dependencies
        run: uv sync

      - name: Run unit tests
        run: uv run pytest tests/unit/ -v

      - name: Run integration tests
        run: uv run pytest tests/integration/ -v

      - name: Type checking
        run: uv run mypy src/

Pre-Commit Validation

Add a script that validates your server starts correctly:

#!/bin/bash
# scripts/validate-server.sh
# Starts the server, waits for initialization, then shuts down

timeout 10 python server.py &
SERVER_PID=$!

sleep 2

if kill -0 $SERVER_PID 2>/dev/null; then
    echo "Server started successfully"
    kill $SERVER_PID
    exit 0
else
    echo "Server failed to start"
    exit 1
fi

Summary

Testing and debugging MCP servers effectively requires three complementary approaches. Interactive testing with the MCP Inspector gives you immediate feedback during development. Automated testing with unit tests and integration tests using in-memory transports catches regressions and validates edge cases. Logging to stderr with structured output provides visibility into production behavior.

The most common debugging challenges -- stdout pollution, missing capabilities declarations, unhandled exceptions, and environment mismatches between development and Claude Desktop -- all have straightforward solutions once you know what to look for. Start every debug session with the MCP Inspector, check the Messages panel for protocol-level issues, and use the diagnostic checklist in this guide when something goes wrong.

Frequently Asked Questions

What is the MCP Inspector and how do I use it?

The MCP Inspector is a web-based debugging tool that lets you interactively test MCP servers. For Python servers, run 'mcp dev server.py'. For TypeScript servers, run 'npx @modelcontextprotocol/inspector node dist/index.js'. It opens a browser UI (typically at localhost:5173) where you can list and call tools, browse resources, and test prompts.

Why is my MCP server crashing immediately when started?

The most common cause is using print() or console.log() in a stdio-based server. These output to stdout, which is reserved for JSON-RPC protocol messages. Any non-protocol output corrupts the message stream. Use stderr for logging: logging to sys.stderr in Python, or console.error in TypeScript.

How do I see the raw JSON-RPC messages between client and server?

The MCP Inspector has a 'Messages' panel that shows raw JSON-RPC request/response pairs. For Claude Desktop, check the developer logs at ~/Library/Logs/Claude/ (macOS). You can also wrap your server's transport to log messages to a file.

Can I write unit tests for MCP tool handlers?

Yes. Extract your business logic into separate functions that you test independently of the MCP framework. For integration tests, both SDKs support creating in-memory client-server pairs using InMemoryTransport, allowing you to write tests that call tools through the full MCP protocol stack.

How do I debug a Python MCP server with a debugger like VS Code?

Start your server with the debugpy library for remote debugging: add 'import debugpy; debugpy.listen(5678)' at the top of your server file. Then attach VS Code's debugger to port 5678. Since MCP servers use stdio, you cannot use standard VS Code launch configurations that capture stdout.

My tool works in the Inspector but fails in Claude Desktop. What's wrong?

Common causes: (1) different working directory — use absolute paths, (2) missing environment variables — add them to the 'env' section in claude_desktop_config.json, (3) different Python/Node version — check which runtime Claude Desktop is using, (4) permission issues — Claude Desktop may run with different user permissions.

How do I handle timeouts in MCP tool testing?

MCP clients have built-in timeout settings. For tools that take a long time, send progress notifications to keep the connection alive. In testing, you can configure longer timeouts in the Inspector. For production, implement timeout handling in your tool code and return a meaningful message if an operation takes too long.

What logging framework should I use for MCP servers?

For Python, use the standard logging module configured to output to stderr. For TypeScript, use console.error or a library like pino configured for stderr. The MCP protocol also supports server-initiated log notifications using the notifications/message method, which appear in the client's UI.

How do I test MCP servers that require authentication or API keys?

For the MCP Inspector, set environment variables before launching: 'API_KEY=xxx mcp dev server.py'. For automated tests, use mock/stub services instead of real APIs. Tools like nock (TypeScript) or responses (Python) can intercept HTTP requests and return test data.

How do I run integration tests against my MCP server in CI/CD?

Use the SDK's in-memory transport to create client-server pairs in your test suite. This avoids the need for stdio and works cleanly in CI environments. Alternatively, start your server as a subprocess with stdio, pipe messages to it, and assert on responses. Both approaches work with standard testing frameworks (pytest, Jest/Vitest).
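The subprocess approach can be sketched in a few lines: the stdio transport frames messages as newline-delimited JSON, so you write one request and read one response (the protocolVersion string and the command are illustrative):

```python
import json
import subprocess

def stdio_smoke_test(command: list) -> dict:
    # Launch the server, send a JSON-RPC initialize request over stdin,
    # and read one newline-delimited JSON response from stdout
    proc = subprocess.Popen(
        command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
    )
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "smoke-test", "version": "0.1"},
        },
    }
    proc.stdin.write(json.dumps(request) + "\n")
    proc.stdin.flush()
    response = json.loads(proc.stdout.readline())
    proc.terminate()
    return response
```

In CI you would assert that the response carries the same id and a result with serverInfo; any stdout pollution shows up immediately as a JSON decode error.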
