Getting Started
Add enterprise features to any existing MCP server without changing a single line of your original code. Complete setup in under 5 minutes.
The MCP Proxy Wrapper requires Node.js 18 or later.
🚀 5-Minute Setup
Step 1: Install (30 seconds)
npm install mcp-proxy-wrapper
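If your project doesn't already depend on them, also install the MCP SDK and zod, which the examples below import:
npm install @modelcontextprotocol/sdk zod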
Step 2: Wrap Your Existing Server (3 minutes)
Your existing server code (NO CHANGES REQUIRED):
// server.js - Your existing MCP server
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';
export const server = new McpServer({
name: 'my-existing-server',
version: '1.0.0'
});
// Your existing tools work exactly as before
server.tool('getData', {
query: z.string()
}, async (args) => {
const data = await fetchData(args.query);
return {
content: [{ type: 'text', text: data }]
};
});
Add proxy wrapper (new file: enhanced-server.js):
// enhanced-server.js - Your enhanced server with zero changes to existing code
import { wrapWithProxy, LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
import { server } from './server.js'; // Import your existing server
// Enhance your existing server with AI and monitoring
const enhancedServer = await wrapWithProxy(server, {
plugins: [
new LLMSummarizationPlugin({
options: {
provider: 'openai',
openaiApiKey: process.env.OPENAI_API_KEY,
minContentLength: 200 // Auto-summarize responses over 200 chars
}
})
],
hooks: {
beforeToolCall: async (context) => {
console.log(`🔧 [${new Date().toISOString()}] Calling: ${context.toolName}`);
console.log(`📝 Args:`, context.args);
},
afterToolCall: async (context, result) => {
console.log(`✅ [${new Date().toISOString()}] Completed: ${context.toolName}`);
return result;
}
}
});
// Your existing tools are now enhanced with:
// ✅ AI-powered summarization for long responses
// ✅ Automatic request/response logging
// ✅ Performance monitoring
// ✅ Plugin extensibility
Step 3: Use Your Enhanced Server (1 minute)
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
// Start your enhanced server
const transport = new StdioServerTransport();
await enhancedServer.connect(transport);
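Run it like any Node script (the examples use ES module imports, so your package.json should set "type": "module", or the files should use the .mjs extension):
node enhanced-server.js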
🎉 That's it! Your server now has enterprise features without any changes to your original code.
🔍 See the Difference
Before (Your Original Server)
// Call your original tool
const result = await client.callTool({
name: 'getData',
arguments: { query: 'AI trends' }
});
console.log(result.content[0].text);
// Output: "Artificial intelligence trends include machine learning, natural language processing, computer vision..."
// (No logging, no summarization, no monitoring)
After (With Proxy Wrapper)
// Same tool call, enhanced results
const result = await client.callTool({
name: 'getData',
arguments: { query: 'AI trends' }
});
// Console output shows automatic logging:
// 🔧 [2024-01-15T10:30:00.000Z] Calling: getData
// 📝 Args: { query: 'AI trends' }
// ✅ [2024-01-15T10:30:02.000Z] Completed: getData
console.log(result.content[0].text);
// Output: "Summary: Key AI trends include ML advances, NLP breakthroughs..."
// (Automatically summarized by AI!)
console.log(result._meta);
// {
// summarized: true,
// originalLength: 1200,
// summaryLength: 150,
// processedAt: "2024-01-15T10:30:02.000Z"
// }
✨ Your server instantly gained:
- 🤖 AI Summarization - Long responses automatically summarized
- 📊 Request Logging - Full visibility into tool usage
- ⚡ Performance Monitoring - Response times and metadata
- 🔧 Extensibility - Easy to add more plugins
- 🛡️ Enterprise Ready - Authentication and rate limiting hooks available (see the sketch below)
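For instance, a beforeToolCall hook can gate access before a tool ever runs. The sketch below is a minimal illustration, assuming an error thrown from the hook aborts the call; checkApiKey and the in-memory counter are hypothetical stand-ins for your own auth and rate-limiting logic, not part of the library:
// Hypothetical sketch: gate tool calls with an auth check and a naive
// per-tool rate limit (assumes a thrown error aborts the call)
const callCounts = new Map();
const guardedServer = await wrapWithProxy(server, {
  hooks: {
    beforeToolCall: async (context) => {
      // checkApiKey is a placeholder for your own authentication logic
      if (!checkApiKey(context.args.apiKey)) {
        throw new Error('Unauthorized');
      }
      // Allow at most 100 calls per tool for the life of the process
      const count = (callCounts.get(context.toolName) ?? 0) + 1;
      callCounts.set(context.toolName, count);
      if (count > 100) {
        throw new Error(`Rate limit exceeded for ${context.toolName}`);
      }
    }
  }
});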
Your First Plugin
Let's add the LLM Summarization plugin to enhance your tools:
Configure the Plugin
import { LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
const summaryPlugin = new LLMSummarizationPlugin();
summaryPlugin.updateConfig({
options: {
provider: 'openai', // or 'mock' for testing
openaiApiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o-mini',
maxTokens: 150,
temperature: 0.3,
summarizeTools: ['long-analysis'],
minContentLength: 100
}
});
const proxiedServer = await wrapWithProxy(server, {
plugins: [summaryPlugin]
});
Test Summarization
// This tool now has automatic summarization
proxiedServer.tool('long-analysis', {
data: z.string()
}, async (args) => {
const result = await performLongAnalysis(args.data);
// Plugin automatically summarizes long responses
return result;
});
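To try this without spending API credits, you can switch the plugin to the mock provider mentioned above (assuming it stubs out the LLM call):
// Switch to the mock provider for local testing (no OpenAI key needed)
summaryPlugin.updateConfig({
  options: {
    provider: 'mock',
    minContentLength: 100
  }
});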
Development Workflow
Environment Setup
Create a .env file for your configuration:
# OpenAI API key for LLM plugins
OPENAI_API_KEY=sk-your-openai-key-here
# Optional: Logging level
LOG_LEVEL=debug
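Note that Node doesn't load .env files automatically. In local development, either install dotenv and add import 'dotenv/config'; at the top of your entry file, or (on Node 20.6+) use the built-in flag:
# Load the .env file at startup (Node 20.6+)
node --env-file=.env dist/index.js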
Project Structure
my-mcp-server/
├── src/
│   ├── index.ts       # Main server file
│   ├── tools/         # Your tool implementations
│   └── config/        # Configuration
├── package.json
├── .env               # Environment variables
└── tsconfig.json
Sample Server Implementation
// src/index.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { wrapWithProxy, LLMSummarizationPlugin, ChatMemoryPlugin } from 'mcp-proxy-wrapper';
import { z } from 'zod';
async function main() {
// Create base server
const server = new McpServer({
name: 'my-ai-tools',
version: '1.0.0'
});
// Configure plugins
const plugins = [];
if (process.env.OPENAI_API_KEY) {
const summaryPlugin = new LLMSummarizationPlugin();
summaryPlugin.updateConfig({
options: {
provider: 'openai',
openaiApiKey: process.env.OPENAI_API_KEY,
model: 'gpt-4o-mini',
maxTokens: 150
}
});
plugins.push(summaryPlugin);
const memoryPlugin = new ChatMemoryPlugin();
memoryPlugin.updateConfig({
options: {
saveResponses: true,
maxEntries: 100,
enableChat: true
}
});
plugins.push(memoryPlugin);
}
// Wrap with proxy
const proxiedServer = await wrapWithProxy(server, { plugins });
// Register tools
proxiedServer.tool('text-analysis', {
text: z.string(),
analysisType: z.enum(['sentiment', 'summary', 'keywords'])
}, async (args) => {
// Your AI analysis logic here
const result = await analyzeText(args.text, args.analysisType);
return {
content: [{
type: 'text',
text: JSON.stringify(result, null, 2)
}]
};
});
// Start server
const transport = new StdioServerTransport();
await proxiedServer.connect(transport);
}
main().catch(console.error);
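The sample calls an analyzeText helper that isn't shown above; a minimal placeholder, purely for illustration, could be:
// Placeholder - replace with your real analysis logic
async function analyzeText(text: string, analysisType: string) {
  return {
    analysisType,
    inputLength: text.length,
    note: 'stub result - wire up a real analyzer here'
  };
}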
Testing Your Server
Manual Testing with MCP Inspector
# Install MCP Inspector
npm install -g @modelcontextprotocol/inspector
# Test your server
mcp-inspector node dist/index.js
Automated Testing
// tests/server.test.ts
import { describe, test, expect } from '@jest/globals';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { wrapWithProxy } from 'mcp-proxy-wrapper';
import { z } from 'zod';
describe('My MCP Server', () => {
test('tool returns expected result', async () => {
// Create test server
const server = new McpServer({ name: 'test-server', version: '1.0.0' });
// Register test tool
server.tool('text-analysis', {
text: z.string(),
analysisType: z.enum(['sentiment', 'readability'])
}, async (args) => {
return {
content: [{ type: 'text', text: `Analysis result: ${args.analysisType} is positive` }]
};
});
const proxiedServer = await wrapWithProxy(server, { plugins: [] });
const result = await proxiedServer.callTool('text-analysis', {
text: 'This is great!',
analysisType: 'sentiment'
});
expect(result.content[0].text).toContain('positive');
});
});
Transport Options
The proxy wrapper supports all MCP transport methods:
// STDIO (most common for CLI tools)
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
const transport = new StdioServerTransport();
await proxiedServer.connect(transport);
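For HTTP clients, the MCP TypeScript SDK also provides a Streamable HTTP transport. The sketch below shows a stateless Express wiring; the API names come from the SDK and may differ across versions, so verify against the SDK docs:
// Streamable HTTP (for web clients) - a sketch, stateless mode
import express from 'express';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
const app = express();
app.use(express.json());
app.post('/mcp', async (req, res) => {
  // A fresh transport per request; no session tracking
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  res.on('close', () => transport.close());
  await proxiedServer.connect(transport);
  await transport.handleRequest(req, res, req.body);
});
app.listen(3000);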
Common Patterns
Environment-Based Configuration
const config = {
development: {
logLevel: 'debug',
plugins: []
},
production: {
logLevel: 'info',
plugins: [
(() => {
const plugin = new LLMSummarizationPlugin();
plugin.updateConfig({
options: {
provider: 'openai',
openaiApiKey: process.env.OPENAI_API_KEY!, // Set via environment
model: 'gpt-4o-mini'
}
});
return plugin;
})()
]
}
};
const currentConfig = config[process.env.NODE_ENV || 'development'];
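Then wrap your server with whichever configuration is active:
const proxiedServer = await wrapWithProxy(server, {
  plugins: currentConfig.plugins
});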
Security Best Practices
API Key Security: Never commit API keys to version control. Always use environment variables or secure secrets management.
Environment Variables
Create a .env file for local development (never commit this file):
# .env (add to .gitignore!)
NODE_ENV=development
OPENAI_API_KEY=sk-your-openai-key-here
BLOCKCHAIN_API_KEY=your-blockchain-api-key-here
DATABASE_URL=postgresql://user:pass@localhost:5432/myapp
Git Security
Ensure your .gitignore includes:
# Environment files
.env
.env.local
.env.production
.env.*.local
# API keys and secrets
**/config/secrets.json
**/config/*.key
*.pem
# Build artifacts with embedded secrets
dist/
build/
Production Deployment
Use secure environment variable injection:
# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY dist/ ./dist/
# Never COPY .env files into Docker images!
# Use runtime environment variables instead
CMD ["node", "dist/index.js"]
# docker-compose.yml or Kubernetes manifests
version: '3.8'
services:
mcp-server:
build: .
environment:
- NODE_ENV=production
# Reference external secrets, never inline API keys
- OPENAI_API_KEY=${OPENAI_API_KEY}
- BLOCKCHAIN_API_KEY=${BLOCKCHAIN_API_KEY}
# Use Docker secrets or external secret management
secrets:
- openai_api_key
- blockchain_api_key
Validation and Sanitization
// Always validate sensitive inputs
proxiedServer.tool('api-call', {
apiKey: z.string().min(20).max(200), // Validate API key format
endpoint: z.string().url(), // Ensure valid URLs only
data: z.object({}).passthrough() // Validate data structure
}, async ({ apiKey, endpoint, data }) => {
// Additional validation
if (!endpoint.startsWith('https://')) {
throw new Error('Only HTTPS endpoints allowed');
}
// Use the validated inputs safely
return await makeSecureApiCall(endpoint, data, apiKey);
});
Error Handling
proxiedServer.tool('risky-operation', schema, async (args) => {
try {
return await performRiskyOperation(args);
} catch (error) {
// Plugin errors are handled automatically
// Tool errors should return MCP error format
return {
content: [{
type: 'text',
text: 'Operation failed'
}],
isError: true
};
}
});
Multiple Plugins
const proxiedServer = await wrapWithProxy(server, {
plugins: [
{ plugin: memoryPlugin, priority: 20 }, // Memory first (higher priority)
{ plugin: summaryPlugin, priority: 10 } // Then summarization (lower priority)
]
});
Next Steps
Your server is now enhanced with plugin capabilities! Explore our other guides to add more functionality.
- How It Works: Understand the proxy wrapper architecture
- Plugins: Add summarization, memory, and more
- Examples: See real-world implementations
- API Reference: Complete API documentation
- Deployment: Deploy to production
Troubleshooting
Common Issues
Plugin not loading:
# Check your environment variables
echo $OPENAI_API_KEY
# Verify plugin configuration
npm run test
Tool calls failing:
// Add debug logging
const proxiedServer = await wrapWithProxy(server, {
plugins: [plugin],
debug: true
});
TypeScript errors:
# Ensure you have the latest types
npm install --save-dev @types/node
Need more help? Check our troubleshooting guide or open an issue on GitHub.