
Integrating MCP with ChatGPT, Perplexity & Open-Source LLMs

How to use MCP servers with ChatGPT, Perplexity, open-source models like LLaMA, and other AI platforms beyond Claude.

22 min read
Updated February 25, 2026
By MCP Server Spot

MCP (Model Context Protocol) was designed as an open, provider-agnostic protocol. While Anthropic created it and Claude Desktop has the most mature integration, the protocol specification is open-source and any AI platform can implement support. This guide covers how to use MCP servers with ChatGPT, Perplexity, open-source LLMs like LLaMA and Mistral, and third-party AI platforms.

The core promise of MCP is simple: build a server once, use it with any compatible client. As more platforms adopt MCP, the servers you build today become more valuable over time.

The MCP Multi-Provider Landscape

The MCP ecosystem spans three tiers of integration:

| Tier | Platforms | Integration Type |
| --- | --- | --- |
| Native | Claude Desktop, Claude Code, Cursor, VS Code | Built-in MCP client, first-class support |
| Official | ChatGPT (OpenAI) | Platform-level MCP support |
| Community | LLaMA, Mistral, GPT-4 (via API), other LLMs | Third-party MCP clients and bridges |

Regardless of the tier, the MCP server side is identical. Your server does not know or care which AI model is calling its tools.

ChatGPT and MCP

OpenAI announced support for the Model Context Protocol in 2025, recognizing MCP as an emerging standard for AI-tool integration.

How ChatGPT Uses MCP

ChatGPT's MCP integration allows users to connect remote MCP servers that provide tools for ChatGPT to use in conversations. This works similarly to Claude Desktop's MCP integration:

  1. The user configures an MCP server URL in ChatGPT's settings
  2. ChatGPT connects to the server and discovers available tools
  3. During conversations, ChatGPT can call these tools when relevant
  4. Tool responses are incorporated into ChatGPT's replies

Connecting MCP Servers to ChatGPT

Since ChatGPT is a web application, it connects to remote MCP servers using HTTP-based transports (SSE or Streamable HTTP). Local stdio servers cannot be used directly.

To make a local MCP server accessible to ChatGPT:

  1. Deploy your server remotely with HTTP transport (see Deploying Remote MCP Servers)
  2. Configure authentication (OAuth 2.1 or API key)
  3. Add the server URL in ChatGPT's tool/integration settings
# Example: Deploy a Python MCP server with SSE transport for ChatGPT
from mcp.server.fastmcp import FastMCP
import uvicorn

mcp = FastMCP("My Tools Server")

@mcp.tool()
async def search_docs(query: str) -> str:
    """Search internal documentation by keyword."""
    # ... real lookup elided; return matched snippets as text
    results = f"Documentation matches for '{query}' would appear here"
    return results

# Run with SSE transport for remote access
if __name__ == "__main__":
    # mcp.run(transport="sse") starts the SSE server
    uvicorn.run(mcp.sse_app(), host="0.0.0.0", port=3001)

Authentication Considerations

ChatGPT's MCP connections require proper authentication since servers are accessed over the public internet:

  • OAuth 2.1 is the recommended authentication method for production MCP servers
  • API key authentication works for simpler setups
  • Servers should implement rate limiting and access controls
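
As a concrete illustration of the API-key option, here is a minimal, self-contained check a server might run on each incoming request. The header shape and the `demo-key` value are assumptions for the example; a production server should load the key from a secrets store and prefer OAuth 2.1:

```python
import hmac

# Hypothetical shared secret for this sketch; load from a secrets store in practice.
EXPECTED_KEY = "demo-key"

def is_authorized(headers: dict) -> bool:
    """Validate a 'Bearer <key>' Authorization header against the expected key.

    hmac.compare_digest performs a constant-time comparison, so an attacker
    cannot recover the key byte-by-byte by measuring response timing.
    """
    auth = headers.get("authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    return hmac.compare_digest(presented, EXPECTED_KEY)

print(is_authorized({"authorization": "Bearer demo-key"}))   # True
print(is_authorized({"authorization": "Bearer wrong-key"}))  # False
print(is_authorized({}))                                     # False
```

In a real SSE deployment this check would sit in HTTP middleware in front of the MCP endpoint, combined with the rate limiting mentioned above.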

Using MCP with the OpenAI API

If you are building applications that use the OpenAI API (GPT-4, GPT-4o, etc.), you can bridge MCP servers with OpenAI's function calling feature.

Building an MCP-to-OpenAI Bridge

The bridge connects to MCP servers, converts tool definitions, and manages the function calling loop:

import asyncio
import json
from openai import OpenAI
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# OpenAI client
openai_client = OpenAI()

async def create_mcp_openai_bridge():
    """Bridge MCP servers with OpenAI function calling."""

    # Connect to MCP server
    server_params = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/server", "run", "server.py"],
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover MCP tools
            mcp_tools = await session.list_tools()

            # Convert MCP tool schemas to OpenAI function definitions
            openai_functions = []
            for tool in mcp_tools.tools:
                openai_functions.append({
                    "type": "function",
                    "function": {
                        "name": tool.name,
                        "description": tool.description,
                        "parameters": tool.inputSchema,
                    },
                })

            # Chat loop with function calling
            messages = [
                {"role": "user", "content": "What are the weather alerts in California?"}
            ]

            response = openai_client.chat.completions.create(
                model="gpt-4o",
                messages=messages,
                tools=openai_functions,
            )

            # Handle function calls
            while response.choices[0].message.tool_calls:
                message = response.choices[0].message
                messages.append(message)

                for tool_call in message.tool_calls:
                    # Route the call to the MCP server
                    result = await session.call_tool(
                        tool_call.function.name,
                        json.loads(tool_call.function.arguments),
                    )

                    # Add the result to the conversation
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": result.content[0].text,
                    })

                # Continue the conversation
                response = openai_client.chat.completions.create(
                    model="gpt-4o",
                    messages=messages,
                    tools=openai_functions,
                )

            print(response.choices[0].message.content)

asyncio.run(create_mcp_openai_bridge())

This pattern works with any LLM API that supports function/tool calling, including OpenAI, Google Gemini, Mistral, and Cohere.
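
The provider-agnostic part of that loop is small enough to isolate. The sketch below shows the routing step every bridge shares, parsing the model's JSON arguments and dispatching to the named tool; the handler names and the lambda are made up for illustration, and in a real bridge the dispatch would be `await session.call_tool(name, args)`:

```python
import json

def route_tool_call(name: str, arguments_json: str, handlers: dict) -> str:
    """Dispatch a model-issued tool call to the matching handler.

    Plain callables stand in for session.call_tool so the flow is
    runnable anywhere, regardless of which LLM produced the call.
    """
    args = json.loads(arguments_json)
    if name not in handlers:
        raise KeyError(f"Model requested unknown tool: {name}")
    return handlers[name](**args)

# Stand-in handler for demonstration
handlers = {"get_alerts": lambda state: f"No active alerts for {state}"}
print(route_tool_call("get_alerts", '{"state": "CA"}', handlers))
# No active alerts for CA
```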

Open-Source LLMs and MCP

Open-source models like LLaMA, Mistral, Qwen, and others can use MCP servers through third-party client libraries and bridges.

mcphost: Multi-Provider MCP Client

mcphost is an open-source Go application that bridges MCP servers with multiple LLM providers:

# Install mcphost
go install github.com/mark3labs/mcphost@latest

# Configure MCP servers in mcphost.json
{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": ["--directory", "/path/to/server", "run", "server.py"]
    }
  }
}
# Run with different LLM providers (the model flag uses provider:model syntax)
mcphost -m ollama:llama3.2
mcphost -m openai:gpt-4o
mcphost -m anthropic:claude-sonnet-4-20250514

mcp-cli: Terminal MCP Client

mcp-cli is a terminal-based MCP client that works with multiple LLM backends:

# Install mcp-cli
pip install mcp-cli

# Configure servers and run
mcp-cli chat --server weather --model ollama/llama3.2

Using MCP with Ollama

Ollama runs open-source models locally. You can bridge MCP servers with Ollama-hosted models:

import asyncio
import json
import httpx
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def mcp_ollama_bridge():
    """Bridge MCP servers with Ollama models."""

    server_params = StdioServerParameters(
        command="uv",
        args=["--directory", "/path/to/server", "run", "server.py"],
    )

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Get MCP tools
            mcp_tools = await session.list_tools()

            # Convert to Ollama tool format
            ollama_tools = []
            for tool in mcp_tools.tools:
                ollama_tools.append({
                    "type": "function",
                    "function": {
                        "name": tool.name,
                        "description": tool.description,
                        "parameters": tool.inputSchema,
                    },
                })

            # Call Ollama with tools
            async with httpx.AsyncClient() as client:
                response = await client.post(
                    "http://localhost:11434/api/chat",
                    json={
                        "model": "llama3.2",
                        "messages": [
                            {
                                "role": "user",
                                "content": "Check weather alerts for Texas",
                            }
                        ],
                        "tools": ollama_tools,
                        "stream": False,
                    },
                )

                result = response.json()

                # Handle tool calls from Ollama
                if result.get("message", {}).get("tool_calls"):
                    for call in result["message"]["tool_calls"]:
                        tool_result = await session.call_tool(
                            call["function"]["name"],
                            call["function"]["arguments"],
                        )
                        print(f"Tool result: {tool_result.content[0].text}")


asyncio.run(mcp_ollama_bridge())

Using MCP with LangChain

LangChain's langchain-mcp-adapters package loads MCP tools as LangChain tools for use within its agent framework:

import asyncio

from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Connect to MCP server and create LangChain tools
server_params = StdioServerParameters(
    command="uv",
    args=["--directory", "/path/to/server", "run", "server.py"],
)

async def run_agent():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Convert MCP tools into LangChain-compatible tools
            tools = await load_mcp_tools(session)

            # Create a LangChain agent with MCP tools
            llm = ChatOpenAI(model="gpt-4o")
            agent = create_react_agent(llm, tools)

            result = await agent.ainvoke({
                "messages": [{"role": "user", "content": "Check CA weather alerts"}]
            })
            print(result)

asyncio.run(run_agent())

Using MCP with LlamaIndex

LlamaIndex also supports MCP tools in its agent framework:

import asyncio

from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

# Point an MCP client at a local stdio server, then wrap it in a tool spec
mcp_client = BasicMCPClient(
    "uv", args=["--directory", "/path/to/server", "run", "server.py"]
)
tool_spec = McpToolSpec(client=mcp_client)

# Get tools for LlamaIndex agents
async def run_agent():
    tools = await tool_spec.to_tool_list_async()

    # Use with any LlamaIndex agent
    from llama_index.agent.openai import OpenAIAgent
    agent = OpenAIAgent.from_tools(tools)
    response = agent.chat("What are the weather alerts in NY?")
    print(response)

asyncio.run(run_agent())

Building a Universal MCP Gateway

For organizations using multiple AI platforms, a gateway pattern centralizes MCP server management:

                ┌───────────────────────┐
                │      MCP Gateway      │
                │ (Single entry point)  │
                └───────────┬───────────┘
                            │
             ┌──────────────┼──────────────┐
             ▼              ▼              ▼
       ┌──────────┐   ┌──────────┐   ┌──────────┐
       │ Weather  │   │ Database │   │ GitHub   │
       │ Server   │   │ Server   │   │ Server   │
       └──────────┘   └──────────┘   └──────────┘

Clients: ChatGPT, Claude, Cursor, Custom Apps
         All connect to the gateway URL
// gateway.ts — Simple MCP gateway that proxies multiple servers
import express from "express";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();

// The gateway acts as both an MCP client (to backend servers)
// and an MCP server (to AI applications)
const serverConfigs = {
  weather: { command: "uv", args: ["--directory", "/path/to/weather", "run", "server.py"] },
  database: { command: "node", args: ["/path/to/db-server/dist/index.js"] },
  github: { command: "npx", args: ["-y", "@modelcontextprotocol/server-github"] },
};

// Gateway aggregates tools from all backend servers
// and exposes them through a single SSE endpoint
app.get("/sse", async (req, res) => {
  // ... create aggregated MCP server
});

app.listen(3000);
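
One detail the gateway sketch above glosses over is tool-name collisions: two backends may both expose a tool called, say, search. A common convention (an assumption of this example, not part of the MCP spec) is to prefix each tool name with its server's config key when aggregating:

```python
def aggregate_tools(backend_tools: dict) -> list:
    """Merge tool catalogs from several backend servers into one list,
    prefixing each name with its server key ('weather__get_alerts')
    so tools from different backends cannot collide."""
    merged = []
    for server_key, tools in backend_tools.items():
        for tool in tools:
            merged.append({**tool, "name": f"{server_key}__{tool['name']}"})
    return merged

catalog = aggregate_tools({
    "weather": [{"name": "get_alerts", "description": "Weather alerts by state"}],
    "github": [{"name": "search", "description": "Search repositories"}],
})
print([t["name"] for t in catalog])  # ['weather__get_alerts', 'github__search']
```

On an incoming call, the gateway splits the prefix back off to decide which backend session should receive the request.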

Cross-Platform Compatibility Considerations

When building MCP servers that will be used across multiple platforms, keep these differences in mind:

Tool Schema Compatibility

MCP uses JSON Schema for tool input definitions. All platforms that support function/tool calling also use JSON Schema, making conversion straightforward. However, some platforms have restrictions:

| Platform | Schema Limits | Notes |
| --- | --- | --- |
| OpenAI | Full JSON Schema support | Most compatible |
| Claude | Full JSON Schema support | Native MCP |
| Ollama | Basic JSON Schema | Complex nested schemas may not work with all models |
| LLaMA (raw) | Depends on fine-tuning | Tool calling quality varies by model version |
| Mistral | Full JSON Schema support | Good tool calling capabilities |
| Google Gemini | Full JSON Schema support | Via function declarations |

Best Practices for Multi-Platform Servers

  1. Keep tool schemas simple -- avoid deeply nested objects when possible
  2. Use clear, descriptive tool names -- different models have different quality of tool selection
  3. Return text content -- all platforms handle text responses; image/binary support varies
  4. Test with multiple models -- a tool that works well with Claude may need description tweaks for GPT-4 or LLaMA
  5. Use specific descriptions -- open-source models often need more explicit tool descriptions than frontier models
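
To make the first practice concrete, compare a flat schema with a nested one. Both are valid JSON Schema (the field names are invented for illustration, and the `max_depth` helper is ad hoc, not part of any SDK), but smaller open-source models fill in the nested shape far less reliably:

```python
# Flat schema: one level of primitive properties -- handled reliably everywhere.
flat_schema = {
    "type": "object",
    "properties": {
        "state": {"type": "string", "description": "Two-letter US state code, e.g. 'CA'"},
        "severity": {"type": "string", "enum": ["minor", "moderate", "severe"]},
    },
    "required": ["state"],
}

# Nested schema: objects inside arrays inside objects -- valid, but error-prone
# for models with weaker tool-calling ability.
nested_schema = {
    "type": "object",
    "properties": {
        "filters": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "field": {"type": "string"},
                    "range": {
                        "type": "object",
                        "properties": {
                            "min": {"type": "number"},
                            "max": {"type": "number"},
                        },
                    },
                },
            },
        },
    },
}

def max_depth(schema: dict) -> int:
    """Rough nesting depth of a JSON Schema's object/array structure."""
    props = schema.get("properties", {})
    items = schema.get("items")
    children = list(props.values()) + ([items] if items else [])
    if not children:
        return 1
    return 1 + max(max_depth(c) for c in children if isinstance(c, dict))

print(max_depth(flat_schema))    # 2
print(max_depth(nested_schema))  # 5
```

When a tool genuinely needs the nested shape, consider splitting it into several flat tools instead.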

Summary

MCP's power lies in its universality. While Claude Desktop offers the most seamless integration, the protocol works with any AI platform through direct support (ChatGPT), API bridges (OpenAI, Google Gemini), open-source clients (mcphost, mcp-cli), and framework integrations (LangChain, LlamaIndex).

The practical implication: every MCP server you build is an investment that pays dividends across the entire AI ecosystem. Build your server once using the standard MCP SDK, deploy it with HTTP transport for remote access, and it becomes available to Claude, ChatGPT, open-source models, and any future platform that adopts the protocol.

Frequently Asked Questions

Does ChatGPT officially support MCP?

OpenAI announced MCP support for ChatGPT in early 2025. ChatGPT can connect to remote MCP servers through its interface, enabling tool use via the MCP protocol. The integration allows ChatGPT users to connect MCP servers and use their tools within conversations, similar to how Claude Desktop works with MCP.

Does Perplexity support MCP servers?

Perplexity has explored MCP integration for its AI search platform. Perplexity's internal systems can use MCP to connect to data sources and tools that enhance search results. Check Perplexity's documentation for the latest status of end-user MCP server configuration.

Can I use MCP servers with open-source models like LLaMA or Mistral?

Yes, through MCP client libraries. Open-source clients like mcp-cli, mcphost, and various community tools can connect to MCP servers and bridge them with any LLM that supports tool/function calling. You run the MCP client locally, connect it to your servers, and it translates between MCP tool calls and the LLM's function calling format.

What is an MCP client library and how do I use one?

An MCP client library implements the client side of the MCP protocol. It connects to MCP servers, discovers their tools, and manages tool call workflows. The official SDKs include client implementations in both Python (the mcp package) and TypeScript (@modelcontextprotocol/sdk). Third-party clients add support for specific LLM providers.

How do I bridge MCP servers with the OpenAI API?

Build or use a bridge that converts between MCP tool definitions and OpenAI function calling format. The bridge connects to MCP servers to discover tools, converts MCP tool schemas to OpenAI function definitions, sends them with your API calls, and routes function call responses back through MCP. Several open-source bridges exist for this purpose.

Is the MCP protocol specific to Anthropic or Claude?

MCP is an open protocol, not tied to any specific AI provider. While Anthropic created and maintains the specification, it is designed to be universal. Any AI platform, LLM provider, or application can implement MCP clients or servers. The protocol specification is open-source under the MIT license.

Can I use the same MCP server with both Claude and ChatGPT?

Yes, that is one of the core benefits of MCP. Since MCP is a standardized protocol, any MCP server works with any MCP-compatible client. A server built for Claude Desktop works identically with ChatGPT, Cursor, open-source clients, or any other MCP-compatible application.

What open-source MCP clients are available?

Several open-source MCP clients exist: mcp-cli (terminal-based client), mcphost (Go-based client supporting multiple LLM providers), and various framework integrations for LangChain, LlamaIndex, and other AI frameworks. The official TypeScript and Python SDKs also include client implementations you can use in custom applications.
