MCP Clients for Enterprise: Custom App Development
Building enterprise-grade MCP client applications — custom integrations, security considerations, multi-tenant architectures, and production patterns.
Enterprise MCP integration goes beyond connecting Claude Desktop to a server. Building a custom MCP client lets you embed AI-tool capabilities directly into your applications, with enterprise-grade authentication, multi-tenant isolation, audit logging, and role-based access control. This guide covers the architecture patterns, implementation details, and operational considerations for enterprise MCP deployments.
Whether you are building an internal AI assistant, adding MCP-powered features to a SaaS product, or creating an enterprise platform that connects multiple AI models with business tools, this guide provides the blueprint.
Why Build a Custom MCP Client
Off-the-shelf clients like Claude Desktop work well for individual users, but enterprise use cases often require capabilities they do not provide:
| Requirement | Claude Desktop | Custom MCP Client |
|---|---|---|
| Custom UI/UX | Fixed interface | Fully customizable |
| SSO/SAML integration | Not supported | Full control |
| Multi-tenant isolation | Single user | Per-tenant sessions |
| Audit logging | Basic | Comprehensive, compliant |
| Tool-level permissions | All or nothing | Role-based (RBAC) |
| Branding | Anthropic branding | White-label |
| Embedding in existing apps | Standalone only | Embedded anywhere |
| LLM provider choice | Claude only | Any LLM provider |
| Data residency controls | Cloud-dependent | Full control |
Architecture Patterns
Pattern 1: Direct Client-Server
The simplest enterprise pattern: your application acts as an MCP client connecting directly to MCP servers.
┌─────────────────────────────┐
│ Your Application │
│ ┌───────────────────────┐ │
│ │ MCP Client SDK │ │
│ └───────────┬───────────┘ │
│ │ │
│ ┌───────────▼───────────┐ │
│ │ LLM Provider │ │
│ │ (Claude API / GPT-4) │ │
│ └───────────────────────┘ │
└──────────────┬──────────────┘
│
┌──────────┼──────────┐
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ CRM │ │Database│ │ Git │
│ Server │ │ Server │ │ Server │
└────────┘ └────────┘ └────────┘
Best for: Small teams, internal tools, single-application deployments.
Pattern 2: MCP Gateway
A centralized gateway handles authentication, routing, and observability. Your application connects to the gateway rather than individual servers.
┌─────────────────────────┐
│ Your Application(s) │
└───────────┬─────────────┘
│ HTTPS
┌───────────▼─────────────┐
│ MCP Gateway │
│ ┌───────────────────┐ │
│ │ Auth / OAuth 2.1 │ │
│ │ Rate Limiting │ │
│ │ Audit Logging │ │
│ │ RBAC Enforcement │ │
│ │ Request Routing │ │
│ └───────────────────┘ │
└──────────┬──────────────┘
│
┌───────┼───────┐
▼ ▼ ▼
┌──────┐┌──────┐┌──────┐
│Server││Server││Server│
│ A ││ B ││ C │
└──────┘└──────┘└──────┘
Best for: Multi-application environments, strict compliance requirements, multi-tenant SaaS.
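The gateway's core responsibilities — authenticate, authorize, route to the owning server, record an audit entry — can be sketched in a few lines. Everything here (`Gateway`, `routes`, `upstreams`) is illustrative and not part of any MCP SDK:

```python
# Minimal sketch of gateway routing logic: authenticate/authorize, forward
# the tool call to the upstream server that owns the tool, and audit it.
from dataclasses import dataclass, field

@dataclass
class Gateway:
    routes: dict = field(default_factory=dict)      # tool name -> server name
    upstreams: dict = field(default_factory=dict)   # server name -> callable
    audit_log: list = field(default_factory=list)

    def call(self, user_roles: set, tool: str, args: dict):
        server = self.routes.get(tool)
        if server is None:
            raise LookupError(f"Unknown tool: {tool}")
        if "viewer" not in user_roles:  # deliberately simplistic RBAC check
            raise PermissionError("viewer role required")
        result = self.upstreams[server](tool, args)
        self.audit_log.append({"tool": tool, "server": server})
        return result
```

In a real gateway the upstream callables would be MCP client sessions and the RBAC check would consult a policy store, but the routing shape stays the same.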
Pattern 3: Sidecar / Service Mesh
In Kubernetes environments, deploy MCP servers as sidecars alongside your application pods:
# Kubernetes pod with MCP sidecar
apiVersion: v1
kind: Pod
metadata:
name: app-with-mcp
spec:
containers:
- name: app
image: my-app:latest
ports:
- containerPort: 8080
- name: mcp-database-server
image: mcp-postgres-server:latest
env:
- name: POSTGRES_URL
valueFrom:
secretKeyRef:
name: db-secrets
key: url
- name: mcp-crm-server
image: mcp-salesforce-server:latest
env:
- name: SALESFORCE_TOKEN
valueFrom:
secretKeyRef:
name: crm-secrets
key: token
Best for: Microservices architectures, Kubernetes-native deployments.
Building a Custom MCP Client
TypeScript Client Implementation
Here is a starting-point MCP client that connects to servers, manages sessions, and provides a clean API for tool invocation (reconnection and error-handling logic are omitted for brevity):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
interface ServerConfig {
name: string;
type: "stdio" | "sse";
command?: string;
args?: string[];
url?: string;
headers?: Record<string, string>;
}
interface ToolCall {
serverName: string;
toolName: string;
arguments: Record<string, unknown>;
}
class EnterpriseMCPClient {
private clients: Map<string, Client> = new Map();
private toolRegistry: Map<string, { server: string; schema: unknown }> = new Map();
async connectServer(config: ServerConfig): Promise<void> {
const client = new Client(
{ name: "enterprise-client", version: "1.0.0" },
{ capabilities: {} }
);
let transport;
if (config.type === "sse" && config.url) {
transport = new SSEClientTransport(new URL(config.url), {
requestInit: {
headers: config.headers || {},
},
});
} else if (config.type === "stdio" && config.command) {
transport = new StdioClientTransport({
command: config.command,
args: config.args || [],
});
} else {
throw new Error(`Invalid server config for ${config.name}`);
}
await client.connect(transport);
this.clients.set(config.name, client);
    // Discover and register tools.
    // Note: if two servers expose a tool with the same name, the later
    // registration wins here — namespace tool names (e.g.
    // `${config.name}.${tool.name}`) if collisions are possible.
    const tools = await client.listTools();
    for (const tool of tools.tools) {
      this.toolRegistry.set(tool.name, {
        server: config.name,
        schema: tool.inputSchema,
      });
    }
console.error(
`Connected to ${config.name}: ${tools.tools.length} tools available`
);
}
async callTool(toolName: string, args: Record<string, unknown>): Promise<string> {
const registration = this.toolRegistry.get(toolName);
if (!registration) {
throw new Error(`Tool not found: ${toolName}`);
}
const client = this.clients.get(registration.server);
if (!client) {
throw new Error(`Server not connected: ${registration.server}`);
}
const result = await client.callTool({
name: toolName,
arguments: args,
});
// Extract text content
return result.content
.filter((c): c is { type: "text"; text: string } => c.type === "text")
.map((c) => c.text)
.join("\n");
}
getAvailableTools(): Array<{ name: string; server: string; schema: unknown }> {
return Array.from(this.toolRegistry.entries()).map(
([name, { server, schema }]) => ({ name, server, schema })
);
}
async disconnect(): Promise<void> {
for (const [name, client] of this.clients) {
await client.close();
console.error(`Disconnected from ${name}`);
}
this.clients.clear();
this.toolRegistry.clear();
}
}
Python Client Implementation
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.client.sse import sse_client
from contextlib import AsyncExitStack
from dataclasses import dataclass
from typing import Any

@dataclass
class ServerConfig:
    name: str
    transport: str  # "stdio" or "sse"
    command: str | None = None
    args: list[str] | None = None
    url: str | None = None
    headers: dict[str, str] | None = None

class EnterpriseMCPClient:
    def __init__(self):
        self.sessions: dict[str, ClientSession] = {}
        self.tool_registry: dict[str, str] = {}  # tool_name -> server_name
        # Keeps every transport and session open for the client's lifetime.
        # Call `await self.exit_stack.aclose()` to tear everything down;
        # calling __aenter__() directly and discarding the context manager
        # would leak connections.
        self.exit_stack = AsyncExitStack()

    async def connect_server(self, config: ServerConfig) -> None:
        """Connect to an MCP server and discover its tools."""
        if config.transport == "stdio":
            params = StdioServerParameters(
                command=config.command,
                args=config.args or [],
            )
            read, write = await self.exit_stack.enter_async_context(
                stdio_client(params)
            )
        elif config.transport == "sse":
            read, write = await self.exit_stack.enter_async_context(
                sse_client(config.url, headers=config.headers or {})
            )
        else:
            raise ValueError(f"Unknown transport: {config.transport}")
        session = await self.exit_stack.enter_async_context(
            ClientSession(read, write)
        )
        await session.initialize()
        self.sessions[config.name] = session
        # Discover tools
        tools = await session.list_tools()
        for tool in tools.tools:
            self.tool_registry[tool.name] = config.name
        print(
            f"Connected to {config.name}: "
            f"{len(tools.tools)} tools available"
        )
async def call_tool(
self, tool_name: str, arguments: dict[str, Any]
) -> str:
"""Call a tool by name, routing to the correct server."""
server_name = self.tool_registry.get(tool_name)
if not server_name:
raise ValueError(f"Tool not found: {tool_name}")
session = self.sessions.get(server_name)
if not session:
raise ValueError(f"Server not connected: {server_name}")
result = await session.call_tool(tool_name, arguments)
return "\n".join(
c.text for c in result.content if hasattr(c, "text")
)
def get_available_tools(self) -> list[dict]:
"""List all available tools across connected servers."""
return [
{"name": name, "server": server}
for name, server in self.tool_registry.items()
]
Authentication and SSO Integration
OAuth 2.1 Flow for MCP
MCP's remote transports support OAuth 2.1 for authentication. In enterprise environments, integrate this with your identity provider (IdP):
// Enterprise OAuth 2.1 integration for MCP
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
class EnterpriseAuth {
private idpUrl: string;
private clientId: string;
private clientSecret: string;
constructor(config: { idpUrl: string; clientId: string; clientSecret: string }) {
this.idpUrl = config.idpUrl;
this.clientId = config.clientId;
this.clientSecret = config.clientSecret;
}
  async getAccessToken(userId: string): Promise<string> {
    // Exchange client credentials for an access token via your enterprise
    // IdP (Okta, Azure AD, etc.). Note: `sub` is not a standard parameter
    // of the client_credentials grant, which is machine-to-machine; IdPs
    // that act on behalf of a specific user typically use token exchange
    // (RFC 8693) or the authorization code flow instead.
    const response = await fetch(`${this.idpUrl}/oauth/token`, {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "client_credentials",
        client_id: this.clientId,
        client_secret: this.clientSecret,
        scope: "mcp:tools mcp:resources",
        sub: userId,
      }),
    });
    if (!response.ok) {
      throw new Error(`Token request failed: ${response.status}`);
    }
    const { access_token } = await response.json();
    return access_token;
  }
}
async createAuthenticatedTransport(
serverUrl: string,
userId: string
): Promise<SSEClientTransport> {
const token = await this.getAccessToken(userId);
return new SSEClientTransport(new URL(serverUrl), {
requestInit: {
headers: {
Authorization: `Bearer ${token}`,
},
},
});
}
}
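Fetching a fresh token from the IdP on every request is wasteful and rate-limit-prone; tokens should be cached until shortly before they expire. A framework-free sketch (the `TokenCache` class and `fetch` callable signature are illustrative):

```python
# Sketch: cache access tokens per user until shortly before expiry, so the
# client does not hit the IdP on every request.
import time

class TokenCache:
    def __init__(self, fetch, skew: float = 30.0):
        self._fetch = fetch  # callable(user_id) -> (token, expires_in_seconds)
        self._skew = skew    # refresh this many seconds before expiry
        self._cache: dict[str, tuple[str, float]] = {}

    def get(self, user_id: str) -> str:
        entry = self._cache.get(user_id)
        # Reuse the cached token only if it is valid past the skew window
        if entry and entry[1] > time.monotonic() + self._skew:
            return entry[0]
        token, expires_in = self._fetch(user_id)
        self._cache[user_id] = (token, time.monotonic() + expires_in)
        return token
```

In `EnterpriseAuth` above, `getAccessToken` would consult a cache like this before making the token request.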
Server-Side Authentication
On the MCP server side, validate tokens and extract user context:
import express from "express";
import jwt from "jsonwebtoken";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
const app = express();
// Authentication middleware
function authenticateToken(req: express.Request, res: express.Response, next: express.NextFunction) {
const authHeader = req.headers["authorization"];
const token = authHeader?.split(" ")[1];
if (!token) {
return res.status(401).json({ error: "Authentication required" });
}
try {
const decoded = jwt.verify(token, process.env.JWT_SECRET!) as {
sub: string;
roles: string[];
tenantId: string;
};
req.user = decoded;
next();
} catch {
return res.status(403).json({ error: "Invalid token" });
}
}
// Protected SSE endpoint
app.get("/sse", authenticateToken, async (req, res) => {
const { sub: userId, roles, tenantId } = req.user;
// Create a server instance scoped to this user/tenant
const server = createServerWithPermissions(userId, roles, tenantId);
const transport = new SSEServerTransport("/messages", res);
await server.connect(transport);
});
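For intuition, here is what `jwt.verify` does for an HS256 token, written against only the standard library. This is for understanding the mechanics — use a vetted JWT library (jsonwebtoken, PyJWT) in production, which also handles algorithm pinning, `exp`/`aud` claims, and key rotation:

```python
# Minimal HS256 JWT sign/verify using only the standard library.
import base64, hashlib, hmac, json

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def b64url_encode(b: bytes) -> str:
    return base64.urlsafe_b64encode(b).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url_encode(json.dumps(payload).encode())
    sig = b64url_encode(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return f"{header}.{body}.{sig}"

def verify_hs256(token: str, secret: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(
        secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
    ).digest()
    # Constant-time comparison to avoid timing side channels
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("Invalid signature")
    return json.loads(b64url_decode(payload_b64))
```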
Multi-Tenant Architecture
Tenant Isolation Strategies
| Strategy | Isolation Level | Complexity | Use Case |
|---|---|---|---|
| Shared server, tenant context | Logical | Low | SaaS with shared infrastructure |
| Server instance per tenant | Process | Medium | Strong isolation requirements |
| Container per tenant | Container | High | Regulated industries, max isolation |
| Cluster per tenant | Infrastructure | Very High | Largest enterprises, data residency |
Shared Server with Tenant Context
The simplest approach: a single server handles all tenants, using the authentication token to determine data access:
# Example allowlist — SQL identifiers cannot be parameterized, so never
# interpolate an untrusted table name directly into a query.
ALLOWED_TABLES = {"accounts", "contacts", "orders"}

@mcp.tool()
async def query_data(table: str, filters: dict) -> str:
    """Query tenant-scoped data."""
    if table not in ALLOWED_TABLES:
        raise ValueError(f"Unknown table: {table}")
    # Get tenant from the current request context (however your framework
    # exposes it)
    tenant_id = mcp.request_context.tenant_id
    # All queries are automatically scoped to the tenant
    results = await database.query(
        f"SELECT * FROM {table} WHERE tenant_id = $1",
        [tenant_id],
    )
    return format_results(results)
Server Instance Per Tenant
For stronger isolation, spin up dedicated server instances:
class TenantManager {
private tenantServers: Map<string, Server> = new Map();
async getServerForTenant(tenantId: string): Promise<Server> {
if (this.tenantServers.has(tenantId)) {
return this.tenantServers.get(tenantId)!;
}
// Create a new server instance with tenant-specific config
const server = createServer({
databaseUrl: await getSecretForTenant(tenantId, "DATABASE_URL"),
apiKey: await getSecretForTenant(tenantId, "API_KEY"),
dataPrefix: tenantId,
});
this.tenantServers.set(tenantId, server);
return server;
}
}
Role-Based Access Control (RBAC)
Dynamic Tool Filtering
Filter the tools list based on user roles:
server.setRequestHandler(ListToolsRequestSchema, async (request) => {
const userRoles = getCurrentUserRoles(); // from auth context
const allTools = [
{ name: "read_data", requiredRole: "viewer", /* ... */ },
{ name: "write_data", requiredRole: "editor", /* ... */ },
{ name: "delete_data", requiredRole: "admin", /* ... */ },
{ name: "manage_users", requiredRole: "admin", /* ... */ },
];
// Only return tools the user is authorized to use
const authorizedTools = allTools.filter((tool) =>
userRoles.includes(tool.requiredRole)
);
return {
tools: authorizedTools.map(({ requiredRole, ...tool }) => tool),
};
});
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const userRoles = getCurrentUserRoles();
const { name } = request.params;
// Double-check authorization on every tool call
const toolPermissions: Record<string, string> = {
read_data: "viewer",
write_data: "editor",
delete_data: "admin",
manage_users: "admin",
};
const requiredRole = toolPermissions[name];
if (requiredRole && !userRoles.includes(requiredRole)) {
return {
content: [{ type: "text", text: `Access denied: ${name} requires ${requiredRole} role.` }],
isError: true,
};
}
// ... handle the tool call
});
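The same filter-then-recheck pattern, reduced to its language-agnostic core in Python (the `TOOL_PERMISSIONS` mapping mirrors the one above; structure is illustrative):

```python
# Role-based tool filtering: filter the advertised tool list, and re-check
# the same permission table on every call.
TOOL_PERMISSIONS = {
    "read_data": "viewer",
    "write_data": "editor",
    "delete_data": "admin",
    "manage_users": "admin",
}

def authorized_tools(user_roles: set[str]) -> list[str]:
    """Tools to advertise in the ListTools response for this user."""
    return [tool for tool, role in TOOL_PERMISSIONS.items() if role in user_roles]

def check_call(tool: str, user_roles: set[str]) -> None:
    """Re-check authorization on every CallTool request."""
    required = TOOL_PERMISSIONS.get(tool)
    if required is not None and required not in user_roles:
        raise PermissionError(f"{tool} requires {required} role")
```

Filtering the list is UX; the per-call check is the actual security boundary, since a client can attempt to call a tool it was never shown.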
Audit Logging and Compliance
Comprehensive Audit Trail
interface AuditEntry {
timestamp: string;
userId: string;
tenantId: string;
action: "tool_call" | "resource_read" | "prompt_use";
toolName?: string;
resourceUri?: string;
inputSummary: string; // Redacted summary of inputs
outputSummary: string; // Redacted summary of outputs
duration: number; // milliseconds
status: "success" | "error";
clientIp: string;
sessionId: string;
}
class AuditLogger {
private store: AuditStore;
async logToolCall(
context: RequestContext,
toolName: string,
input: unknown,
output: unknown,
duration: number,
status: "success" | "error"
): Promise<void> {
const entry: AuditEntry = {
timestamp: new Date().toISOString(),
userId: context.userId,
tenantId: context.tenantId,
action: "tool_call",
toolName,
inputSummary: this.redactPII(JSON.stringify(input)).substring(0, 500),
outputSummary: this.redactPII(JSON.stringify(output)).substring(0, 500),
duration,
status,
clientIp: context.clientIp,
sessionId: context.sessionId,
};
await this.store.append(entry);
}
private redactPII(text: string): string {
// Redact common PII patterns
return text
.replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL_REDACTED]")
.replace(/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN_REDACTED]")
.replace(/\b\d{16}\b/g, "[CC_REDACTED]");
}
}
Compliance-Ready Logging
For SOC 2 and GDPR compliance:
- Immutable logs: Use append-only storage (AWS CloudTrail, Azure Immutable Blob Storage)
- Retention policies: Automatically archive/delete logs per your data retention policy
- PII handling: Redact sensitive data before logging, or log references rather than values
- Access controls: Restrict who can read audit logs
- Integrity verification: Use cryptographic hashing to detect log tampering
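The integrity-verification bullet can be implemented with hash chaining: each entry stores a hash over its own content plus the previous entry's hash, so editing any earlier record breaks every hash after it. A sketch (the storage format is illustrative, not a full audit store):

```python
# Tamper-evident audit log via hash chaining.
import hashlib, json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Canonical JSON so the hash is independent of key order
    canonical = json.dumps(entry, sort_keys=True)
    return hashlib.sha256((prev_hash + canonical).encode()).hexdigest()

def append_entry(log: list, entry: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"entry": entry, "hash": entry_hash(entry, prev)})

def verify_chain(log: list) -> bool:
    prev = "genesis"
    for record in log:
        if record["hash"] != entry_hash(record["entry"], prev):
            return False
        prev = record["hash"]
    return True
```

Periodically anchoring the latest hash in an external system (or a managed service like AWS QLDB-style ledgers) makes truncation of the log's tail detectable too.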
Rate Limiting and Resource Management
import rateLimit from "express-rate-limit";
// Per-user rate limiting
const toolCallLimiter = rateLimit({
windowMs: 60 * 1000, // 1 minute
max: 100, // 100 tool calls per minute per user
keyGenerator: (req) => req.user?.sub || req.ip,
message: { error: "Rate limit exceeded. Please wait before making more tool calls." },
});
app.post("/messages", authenticateToken, toolCallLimiter, async (req, res) => {
// ... handle MCP messages
});
// Per-tenant rate limiting for shared servers
const tenantLimiter = rateLimit({
windowMs: 60 * 1000,
max: 1000,
keyGenerator: (req) => req.user?.tenantId || "unknown",
});
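Under the hood, per-key limiting like the above is a sliding (or fixed) window over recent events. A minimal sliding-window sketch, with time passed in explicitly for testability (the class name and API are illustrative):

```python
# Per-key sliding-window rate limiter: allow at most `limit` events per
# `window_ms` milliseconds for each key.
from collections import defaultdict, deque

class SlidingWindowLimiter:
    def __init__(self, limit: int, window_ms: int):
        self.limit = limit
        self.window_ms = window_ms
        self.events: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str, now_ms: int) -> bool:
        q = self.events[key]
        # Evict events that have fallen out of the window
        while q and q[0] <= now_ms - self.window_ms:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now_ms)
        return True
```

Keyed on user ID this gives per-user limits; keyed on tenant ID it gives the per-tenant limiter shown above. In a multi-process deployment the event store would live in Redis or similar rather than process memory.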
Monitoring Enterprise MCP Deployments
Key Metrics to Track
| Metric | Description | Alert Threshold |
|---|---|---|
| mcp.tool_calls.total | Total tool calls (by tool, server, tenant) | N/A (informational) |
| mcp.tool_calls.errors | Failed tool calls | > 5% error rate |
| mcp.tool_calls.duration_p95 | 95th percentile latency | > 5 seconds |
| mcp.sessions.active | Currently active MCP sessions | > 80% capacity |
| mcp.auth.failures | Authentication failures | > 10 per minute |
| mcp.ratelimit.exceeded | Rate limit hits | > 50 per minute |
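To make the table concrete, here is how the p95 and error-rate alerts could be computed from raw samples, using the nearest-rank percentile method (function names and thresholds mirror the table; the aggregation pipeline itself is up to your metrics stack):

```python
# Computing p95 latency and error rate from raw samples (nearest-rank method).
import math

def p95(latencies_ms: list[float]) -> float:
    ordered = sorted(latencies_ms)
    rank = math.ceil(0.95 * len(ordered))  # nearest-rank percentile
    return ordered[rank - 1]

def error_rate(outcomes: list[str]) -> float:
    return sum(o == "error" for o in outcomes) / len(outcomes)

def should_alert(latencies_ms: list[float], outcomes: list[str]) -> bool:
    # Thresholds from the metrics table: p95 > 5s or error rate > 5%
    return p95(latencies_ms) > 5000 or error_rate(outcomes) > 0.05
```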
Health Dashboard
Build a dashboard that shows:
- Real-time: Active sessions, tool call rate, error rate
- Trends: Usage patterns by tenant, tool popularity, latency trends
- Alerts: Authentication failures, server downtime, rate limit spikes
- Compliance: Audit log completeness, data access patterns
What to Read Next
- Understand MCP security: MCP Security Model
- Enterprise use cases in detail: Enterprise Use Cases
- MCP architecture deep dive: MCP Architecture Explained
- Deploy remote servers: Deploying Remote MCP Servers
- Browse production servers: MCP Server Directory
Summary
Enterprise MCP client development requires thinking beyond simple tool calling. The key components are: custom clients that embed MCP into your applications, an authentication layer that integrates with enterprise identity providers, multi-tenant isolation that keeps customer data separate, RBAC that controls which users can access which tools, audit logging for compliance, and monitoring for operational visibility.
Start with the direct client-server pattern and add the gateway layer when your deployment grows to multiple applications or strict compliance requirements. The MCP SDKs handle the protocol complexity, letting you focus on the enterprise integration patterns that matter for your organization.
Frequently Asked Questions
What is an MCP client and how does it differ from an MCP server?
An MCP client is an application that connects to MCP servers to consume their tools, resources, and prompts. Claude Desktop, Cursor, and ChatGPT are examples of MCP clients. MCP servers expose capabilities; MCP clients discover and use them. In an enterprise context, you build custom clients that embed MCP into your own applications.
When should an enterprise build a custom MCP client instead of using Claude Desktop?
Build a custom client when you need: integration with your existing application UI, multi-tenant user isolation, custom authentication flows (SSO/SAML), audit logging for compliance, per-user or per-team tool permissions, white-labeling, or when you want to embed AI-tool capabilities into an existing product.
What SDKs are available for building MCP clients?
The official MCP SDKs include client implementations in both TypeScript (@modelcontextprotocol/sdk) and Python (mcp package). Both support stdio and SSE transports. The TypeScript SDK's Client class and Python's ClientSession class handle the full MCP protocol lifecycle including capability negotiation, tool discovery, and tool calling.
How do I implement multi-tenant MCP in an enterprise application?
Each tenant should have isolated MCP sessions. Deploy separate server instances per tenant, or use a single server with tenant context passed via authentication tokens. Ensure tool calls include tenant identification, and implement data isolation at the server level so one tenant cannot access another's data.
How does MCP handle authentication in enterprise environments?
MCP supports OAuth 2.1 for remote server authentication. For enterprise SSO, implement an OAuth 2.1 authorization server that integrates with your IdP (Okta, Azure AD, etc.). The MCP client obtains tokens through the standard OAuth flow and passes them with each request. The server validates tokens and applies appropriate permissions.
Can I add role-based access control (RBAC) to MCP tools?
Yes, implement RBAC at the server level. When a client connects with an authentication token, the server extracts the user's roles and only exposes tools that the user is authorized to use. The ListTools response can be dynamically filtered based on the authenticated user's permissions.
How do I audit MCP tool usage for compliance?
Implement an audit middleware in your MCP server or client that logs every tool call with: timestamp, user identity, tool name, input parameters (with PII redaction), response summary, and client IP. Store audit logs in a tamper-resistant system (append-only database, AWS CloudTrail, or similar). This supports SOC 2, HIPAA, and GDPR compliance requirements.
What is the recommended architecture for enterprise MCP deployments?
Use a hub-and-spoke model: a central MCP gateway handles authentication, routing, and audit logging, while individual MCP servers handle specific tool domains (database, CRM, file system). Deploy the gateway behind your API gateway or service mesh. Each server runs in its own container with least-privilege access to backend systems.
Related Guides
A comprehensive breakdown of the MCP architecture — how clients, servers, hosts, and transports work together to enable AI-tool communication.
The complete guide to MCP security — OAuth 2.1 authentication, permission models, transport security, and securing your MCP deployments.
Enterprise deployments of MCP — secure data access patterns, compliance, multi-tenant architectures, and real-world case studies from organizations using MCP.