API Reference

Complete API documentation for the MCP Proxy Wrapper and plugin system.

Core API

wrapWithProxy(server, options)

Wraps an existing MCP server with proxy functionality and plugin support.

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { wrapWithProxy } from 'mcp-proxy-wrapper';
 
const proxiedServer = await wrapWithProxy(server, {
  plugins: [],        // (ProxyPlugin | PluginRegistration)[] - optional
  // hooks: { ... },  // ProxyHooks - optional before/after tool call hooks
  debug: false        // boolean - optional, default: false
});

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| server | McpServer | Yes | MCP server instance to wrap |
| options.plugins | (ProxyPlugin \| PluginRegistration)[] | No | Array of plugins to apply |
| options.hooks | ProxyHooks | No | Before/after tool call hooks |
| options.debug | boolean | No | Enable debug logging (default: false) |

Returns

Promise<McpServer> - Enhanced server instance with proxy capabilities

Example

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { wrapWithProxy, LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
 
const summaryPlugin = new LLMSummarizationPlugin();
summaryPlugin.updateConfig({
  options: {
    provider: 'mock', // Use 'openai' with API key for production
    minContentLength: 100,
    summarizeTools: ['search', 'analyze']
  }
});
 
const server = new McpServer({ name: 'my-server', version: '1.0.0' });
const proxiedServer = await wrapWithProxy(server, {
  plugins: [summaryPlugin],
  debug: true
});

wrapWithEnhancedProxy(server, options) (v2 API)

Enhanced version with advanced lifecycle management and performance features.

import { wrapWithEnhancedProxy, EnhancedProxyWrapperOptions } from 'mcp-proxy-wrapper';
 
const proxiedServer = await wrapWithEnhancedProxy(server, {
  plugins: [],             // (ProxyPlugin | PluginRegistration)[] - optional
  // hooks: { ... },       // ProxyHooks - optional
  // lifecycle: { ... },   // LifecycleConfig - optional
  // execution: { ... },   // ExecutionConfig - optional
  // performance: { ... }  // PerformanceConfig - optional
});

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| server | McpServer | Yes | MCP server instance to wrap |
| options.plugins | (ProxyPlugin \| PluginRegistration)[] | No | Array of plugins to apply |
| options.hooks | ProxyHooks | No | Before/after tool call hooks |
| options.lifecycle | LifecycleConfig | No | Plugin lifecycle management |
| options.execution | ExecutionConfig | No | Hook execution configuration |
| options.performance | PerformanceConfig | No | Performance monitoring |

Returns

Promise<McpServer> - Enhanced server with v2 proxy capabilities
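
Example

A minimal usage sketch follows. It assumes ProxyHooks exposes the same beforeToolCall/afterToolCall shapes as the plugin interface below, and it leaves the v2-specific lifecycle, execution, and performance options at their defaults.

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { wrapWithEnhancedProxy } from 'mcp-proxy-wrapper';
 
const server = new McpServer({ name: 'my-server', version: '1.0.0' });
 
const proxiedServer = await wrapWithEnhancedProxy(server, {
  plugins: [],
  hooks: {
    // Assumption: hooks mirror the plugin beforeToolCall/afterToolCall signatures
    beforeToolCall: async (context) => {
      console.log(`Calling ${context.toolName}`, context.args);
    },
    afterToolCall: async (context, result) => {
      return result; // pass the result through unchanged
    }
  }
});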

Plugin Interface

ProxyPlugin

Base interface that all plugins must implement.

interface ProxyPlugin {
  name: string;
  version: string;
  
  // Lifecycle hooks
  beforeToolCall?(context: ToolCallContext): Promise<void | ToolCallResult>;
  afterToolCall?(context: ToolCallContext, result: ToolCallResult): Promise<ToolCallResult>;
  
  // Plugin lifecycle
  initialize?(context: PluginContext): Promise<void>;
  destroy?(): Promise<void>;
}

Properties

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| name | string | Yes | Unique plugin identifier |
| version | string | Yes | Plugin version (semver) |

Methods

| Method | Parameters | Returns | Description |
| --- | --- | --- | --- |
| beforeToolCall | context: ToolCallContext | Promise<void \| ToolCallResult> | Called before tool execution |
| afterToolCall | context: ToolCallContext, result: ToolCallResult | Promise<ToolCallResult> | Called after tool execution |
| initialize | context: PluginContext | Promise<void> | Plugin initialization |
| destroy | None | Promise<void> | Plugin cleanup |
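
Example

The following is a minimal sketch of a custom plugin implementing this interface. The timing logic and the durationMs metadata key are illustrative, not part of the library.

import { ProxyPlugin, PluginContext, ToolCallContext, ToolCallResult } from 'mcp-proxy-wrapper';
 
// Illustrative plugin: records how long each tool call takes
const timingPlugin: ProxyPlugin = {
  name: 'timing-plugin',
  version: '1.0.0',
 
  async initialize(context: PluginContext): Promise<void> {
    context.logger.info('timing-plugin initialized');
  },
 
  async beforeToolCall(context: ToolCallContext): Promise<void> {
    // Stash the start time on the request metadata
    context.metadata.startedAt = Date.now();
  },
 
  async afterToolCall(context: ToolCallContext, result: ToolCallResult): Promise<ToolCallResult> {
    const durationMs = Date.now() - (context.metadata.startedAt ?? Date.now());
    return { ...result, metadata: { ...result.metadata, durationMs } };
  },
 
  async destroy(): Promise<void> {
    // Nothing to clean up in this example
  }
};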

PluginRegistration

Configuration object for registering plugins with specific settings.

interface PluginRegistration {
  plugin: ProxyPlugin;
  config?: PluginConfig;
}

Properties

| Property | Type | Required | Description |
| --- | --- | --- | --- |
| plugin | ProxyPlugin | Yes | The plugin instance |
| config | PluginConfig | No | Plugin-specific configuration |

Example

const proxiedServer = await wrapWithProxy(server, {
  plugins: [
    // Direct plugin registration
    summaryPlugin,
    
    // Plugin with configuration
    {
      plugin: memoryPlugin,
      config: {
        // Plugin-specific settings go inside the 'options' object
        options: {
          maxEntries: 50,                 // actual ChatMemoryPlugin option
          sessionTimeout: 60 * 60 * 1000  // 1 hour
        }
      }
    }
  ]
});

ToolCallContext

Context object provided to plugin hooks during tool execution.

interface ToolCallContext {
  toolName: string;
  args: Record<string, any>;
  metadata: {
    requestId: string;
    timestamp: number;
    userId?: string;
    [key: string]: any;
  };
}

Properties

| Property | Type | Description |
| --- | --- | --- |
| toolName | string | Name of the tool being called |
| args | Record<string, any> | Arguments passed to the tool |
| metadata.requestId | string | Unique request identifier |
| metadata.timestamp | number | Request timestamp (Unix milliseconds) |
| metadata.userId | string? | User identifier (if available) |

ToolCallResult

Result object returned from tool execution.

interface ToolCallResult {
  content: Array<{
    type: 'text' | 'image' | 'resource';
    text?: string;
    data?: string;
    url?: string;
    mimeType?: string;
  }>;
  isError?: boolean;
  metadata?: Record<string, any>;
}

Properties

| Property | Type | Description |
| --- | --- | --- |
| content | Array<ContentBlock> | Tool response content |
| isError | boolean? | Indicates if result is an error |
| metadata | Record<string, any>? | Additional result metadata |
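
Example

For reference, a tool handler producing a successful response might build the result like this (the content and metadata values are illustrative):

import { ToolCallResult } from 'mcp-proxy-wrapper';
 
const result: ToolCallResult = {
  content: [
    { type: 'text', text: 'Found 3 matching records.' }
  ],
  metadata: {
    source: 'example-database', // illustrative metadata
    cached: false
  }
};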

PluginContext

Context provided during plugin initialization.

interface PluginContext {
  server: McpServer;
  logger: Logger;
  config: Record<string, any>;
}

Core Plugin APIs

LLM Summarization Plugin

import { LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
 
const summaryPlugin = new LLMSummarizationPlugin();
 
// Configuration options
interface SummarizationConfig {
  provider: 'openai' | 'mock';      // AI provider
  openaiApiKey?: string;            // OpenAI API key
  model?: string;                   // Model name (default: gpt-4o-mini)
  maxTokens?: number;               // Max tokens in summary
  temperature?: number;             // Generation temperature
  summarizeTools?: string[];        // Tools to summarize (empty = all)
  minContentLength?: number;        // Min content length to summarize
  saveOriginal?: boolean;           // Save original responses
}
 
// Update plugin configuration
summaryPlugin.updateConfig({
  options: {
    provider: 'openai',
    openaiApiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o-mini',
    maxTokens: 150,
    summarizeTools: ['search', 'research', 'analyze'],
    minContentLength: 100
  }
});
 
// Get original result by storage key
const original = await summaryPlugin.getOriginalResult(storageKey);
 
// Get plugin statistics
const stats = await summaryPlugin.getStats();

Chat Memory Plugin

import { ChatMemoryPlugin } from 'mcp-proxy-wrapper';
 
const memoryPlugin = new ChatMemoryPlugin();
 
// Configuration options
interface MemoryConfig {
  provider: 'openai' | 'mock';      // Chat AI provider
  openaiApiKey?: string;            // OpenAI API key
  model?: string;                   // Model for chat responses
  saveResponses?: boolean;          // Save tool responses
  enableChat?: boolean;             // Enable chat functionality
  maxEntries?: number;              // Max stored entries
  maxSessions?: number;             // Max chat sessions
  sessionTimeout?: number;          // Session timeout in ms
  excludeTools?: string[];          // Tools to exclude from saving
}
 
// Update plugin configuration
memoryPlugin.updateConfig({
  options: {
    provider: 'openai',
    openaiApiKey: process.env.OPENAI_API_KEY,
    saveResponses: true,
    enableChat: true,
    maxEntries: 1000,
    sessionTimeout: 24 * 60 * 60 * 1000
  }
});
 
// Start chat session
const sessionId = await memoryPlugin.startChatSession(userId);
 
// Chat with memory
const response = await memoryPlugin.chatWithMemory(
  sessionId, 
  'What data do I have about sales?', 
  userId
);
 
// Search conversations
const results = memoryPlugin.searchConversations('sales metrics', userId);
 
// Get conversation history
const history = memoryPlugin.getConversationHistory(userId, 20);

Plugin Data Types

// LLM Summarization Plugin Types
interface StoredResult {
  originalResult: ToolCallResult;
  context: Omit<PluginContext, 'pluginData'>;
  timestamp: number;
  toolName: string;
  requestId: string;
  metadata?: Record<string, any>;
}
 
interface LLMProvider {
  generateSummary(content: string, prompt: string, options?: any): Promise<string>;
}
 
// Chat Memory Plugin Types
interface ConversationEntry {
  id: string;
  toolName: string;
  request: {
    args: Record<string, any>;
    timestamp: number;
  };
  response: {
    content: string;
    metadata?: Record<string, any>;
    timestamp: number;
  };
  context: {
    requestId: string;
    userId?: string;
    sessionId?: string;
  };
}
 
interface ChatSession {
  id: string;
  userId?: string;
  messages: ChatMessage[];
  createdAt: number;
  lastActivity: number;
}
 
interface ChatMessage {
  id: string;
  type: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: number;
  metadata?: Record<string, any>;
}

Logger Interface

Logger

Standard logging interface used throughout the system.

interface Logger {
  debug(message: string, meta?: any): void;
  info(message: string, meta?: any): void;
  warn(message: string, meta?: any): void;
  error(message: string, meta?: any): void;
}
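
If you need a Logger of your own (for example in tests), a minimal console-backed sketch could look like this:

import { Logger } from 'mcp-proxy-wrapper';
 
// Minimal console-backed logger (illustrative)
const consoleLogger: Logger = {
  debug: (message, meta) => console.debug(`[debug] ${message}`, meta ?? ''),
  info: (message, meta) => console.info(`[info] ${message}`, meta ?? ''),
  warn: (message, meta) => console.warn(`[warn] ${message}`, meta ?? ''),
  error: (message, meta) => console.error(`[error] ${message}`, meta ?? '')
};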

Built-in Logging

The proxy wrapper includes built-in logging with colored output. Enable debug mode to see detailed execution logs:

const proxiedServer = await wrapWithProxy(server, {
  plugins: [summaryPlugin],
  debug: true  // Enables detailed logging
});

Error Handling

Plugin Errors

Plugin errors are automatically caught and logged without breaking tool execution:

// Plugin error handling
try {
  await plugin.beforeToolCall(context);
} catch (error) {
  console.error(`Plugin ${plugin.name} error:`, error);
  // Tool execution continues
}

Tool Errors

Tools should return error results in MCP format:

// Tool error response
return {
  content: [{
    type: 'text',
    text: 'Error: Invalid input provided'
  }],
  isError: true
};

Plugin Error Recovery

Built-in plugins recover from their own failures gracefully; the excerpts below illustrate the patterns they use:

// LLM Summarization error (falls back to original)
return {
  ...result,
  result: {
    ...result.result,
    _meta: {
      ...result.result._meta,
      summarizationError: 'OpenAI API unavailable',
      fallbackToOriginal: true
    }
  }
};
 
// Chat Memory error (logs but doesn't break tool call)
catch (error) {
  this.logger?.error(`Failed to save conversation entry: ${error}`);
  return result; // Return original result
}

Type Definitions

Complete TypeScript Definitions

// Export all types for use in your applications
export {
  ProxyPlugin,
  BasePlugin,
  ToolCallContext,
  ToolCallResult,
  PluginContext,
  PluginConfig,
  PluginMetadata,
  PluginStats,
  Logger
} from 'mcp-proxy-wrapper';

Migration Guide

From Direct MCP Server

// Before: Direct MCP server
const server = new McpServer(config);
server.tool('my-tool', schema, handler);
 
// After: Wrapped with proxy
const proxiedServer = await wrapWithProxy(server, { plugins: [] });
proxiedServer.tool('my-tool', schema, handler);

Adding AI Enhancement

// Add AI summarization to existing setup
import { LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
 
const summaryPlugin = new LLMSummarizationPlugin();
summaryPlugin.updateConfig({
  options: {
    provider: 'openai',
    openaiApiKey: process.env.OPENAI_API_KEY,
    summarizeTools: ['research', 'analyze'],
    minContentLength: 200
  }
});
 
const proxiedServer = await wrapWithProxy(server, {
  plugins: [summaryPlugin]
});

Backward Compatibility: The proxy wrapper maintains full compatibility with existing MCP server code. No changes are required to your tool implementations.

Best Practices

Plugin Development

  1. Error Isolation: Always handle errors gracefully (see the sketch after this list)
  2. Performance: Minimize blocking operations in beforeToolCall
  3. Logging: Use structured logging with context
  4. Testing: Write comprehensive tests for plugin logic
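
For error isolation (point 1), a plugin can catch its own failures so a plugin bug never turns into a failed tool call. The recordUsage helper below is hypothetical and stands in for any plugin-specific work:

import { ProxyPlugin, ToolCallContext, ToolCallResult } from 'mcp-proxy-wrapper';
 
// Hypothetical helper, shown only for the example
async function recordUsage(toolName: string): Promise<void> {
  // e.g. write to a metrics store
}
 
// Illustrative plugin: all of its work happens inside its own try/catch
const isolatedPlugin: ProxyPlugin = {
  name: 'isolated-plugin',
  version: '1.0.0',
 
  async afterToolCall(context: ToolCallContext, result: ToolCallResult): Promise<ToolCallResult> {
    try {
      await recordUsage(context.toolName);
    } catch (error) {
      // Swallow and log: the original result is still returned
      console.error(`isolated-plugin post-processing failed: ${error}`);
    }
    return result;
  }
};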

Production Deployment

  1. Environment Variables: Use environment-based configuration (see the sketch after this list)
  2. Database: Use PostgreSQL for production data storage
  3. Monitoring: Implement health checks and alerting
  4. Security: Follow security best practices for API keys
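
For environment-based configuration (point 1), a common pattern is to pick the provider and API key from process.env so the same code runs locally against the mock provider and against OpenAI in production. This sketch uses the SummarizationConfig options documented above; SUMMARY_MODEL is an illustrative variable name:

import { LLMSummarizationPlugin } from 'mcp-proxy-wrapper';
 
const summaryPlugin = new LLMSummarizationPlugin();
 
// Fall back to the mock provider when no API key is configured
summaryPlugin.updateConfig({
  options: {
    provider: process.env.OPENAI_API_KEY ? 'openai' : 'mock',
    openaiApiKey: process.env.OPENAI_API_KEY,
    model: process.env.SUMMARY_MODEL ?? 'gpt-4o-mini', // SUMMARY_MODEL is illustrative
    minContentLength: 200
  }
});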

Performance Optimization

  1. Plugin Priorities: Order plugins by execution cost
  2. Caching: Implement caching for expensive operations (see the sketch after this list)
  3. Connection Pooling: Use connection pooling for databases
  4. Rate Limiting: Implement appropriate rate limiting
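
For caching (point 2), the beforeToolCall/afterToolCall pair can act as a simple read-through cache. The sketch below assumes, as the hook's void | ToolCallResult return type implies, that a ToolCallResult returned from beforeToolCall is served in place of executing the tool; the in-memory Map and key scheme are illustrative only:

import { ProxyPlugin, ToolCallContext, ToolCallResult } from 'mcp-proxy-wrapper';
 
// Illustrative in-memory cache keyed by tool name + arguments
const cache = new Map<string, ToolCallResult>();
 
const cachingPlugin: ProxyPlugin = {
  name: 'caching-plugin',
  version: '1.0.0',
 
  async beforeToolCall(context: ToolCallContext): Promise<void | ToolCallResult> {
    const key = `${context.toolName}:${JSON.stringify(context.args)}`;
    // Assumption: returning a ToolCallResult here short-circuits the tool call
    return cache.get(key);
  },
 
  async afterToolCall(context: ToolCallContext, result: ToolCallResult): Promise<ToolCallResult> {
    if (!result.isError) {
      const key = `${context.toolName}:${JSON.stringify(context.args)}`;
      cache.set(key, result);
    }
    return result;
  }
};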

Ready to build? This API reference covers everything you need to integrate the MCP Proxy Wrapper into your applications.