Zen MCP Server
The Zen MCP Server acts as a universal bridge, allowing users to connect their primary AI assistant (such as Claude Desktop) to a wide variety of other powerful AI models simultaneously. Instead of being locked into a single model, this tool lets a primary assistant "call a friend" from OpenAI, Google Gemini, or any model available through OpenRouter.
How to Use
1. Installation
Prerequisites:
* Python 3.11+
* Node.js >= 14.0.0
* Git
* At least one API key (Gemini, OpenAI, or OpenRouter)
Quick Start:
You can run the server directly using npx:
npx zen-mcp-server-199bio
On the first run, the wrapper will automatically:
1. Check for Python 3.11+.
2. Clone the server to ~/.zen-mcp-server.
3. Create a .env file and prompt for API keys (see the example below).
4. Set up a Python virtual environment and install dependencies.
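For reference, the resulting .env file in ~/.zen-mcp-server simply holds the keys you supply at the prompt. A minimal sketch, assuming the wrapper stores them as standard KEY=value pairs with the same variable names used in the Claude Desktop config below (only one key is required):

# ~/.zen-mcp-server/.env -- include only the providers you use
GEMINI_API_KEY=your_gemini_key_here
OPENAI_API_KEY=your_openai_key_here
OPENROUTER_API_KEY=your_openrouter_key_here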
2. Configuration
Claude Desktop Configuration
Add the following to your claude_desktop_config.json file:
{
  "mcpServers": {
    "zen": {
      "command": "npx",
      "args": ["zen-mcp-server-199bio"],
      "env": {
        "GEMINI_API_KEY": "your_gemini_key_here",
        "OPENAI_API_KEY": "your_openai_key_here",
        "OPENROUTER_API_KEY": "your_openrouter_key_here"
      }
    }
  }
}
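You only need to include the keys you actually have; the prerequisites above call for at least one. As a sketch, assuming unused providers can simply be omitted from the env block, a Gemini-only configuration would look like:

{
  "mcpServers": {
    "zen": {
      "command": "npx",
      "args": ["zen-mcp-server-199bio"],
      "env": {
        "GEMINI_API_KEY": "your_gemini_key_here"
      }
    }
  }
}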
Config File Locations:
* macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
* Windows: %APPDATA%\Claude\claude_desktop_config.json
* Linux: ~/.config/Claude/claude_desktop_config.json
Claude CLI Configuration
You can also add the server via the CLI:
claude mcp add zen "npx" "zen-mcp-server-199bio"
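To confirm the server registered, you can list configured MCP servers (assuming your version of the Claude CLI supports the mcp list subcommand):
claude mcp list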
3. Available Tools
* zen: Default tool for quick AI consultation (alias for chat).
* chat: Collaborative development discussions.
* thinkdeep: Extended reasoning (utilizes Gemini 2.0 Pro's thinking mode).
* codereview: Professional code review and architectural analysis.
* precommit: Pre-commit validation.
* debug: Advanced debugging assistance with specialized models.
* analyze: Smart file and codebase analysis.
4. Example Prompts
- "use zen" (for quick AI consultations)
- "use zen to review my code"
- "use thinkdeep to analyze this architectural problem"
Use Cases
Use Case 1: Deep Architectural Reasoning for Complex Systems
Problem: When designing complex software architectures, a single AI model might miss subtle logical fallacies or edge cases in distributed systems. Developers often need a "second opinion" from models specifically optimized for long-chain reasoning.
Solution: This MCP allows you to use the thinkdeep tool to invoke Gemini 2.0 Pro’s thinking mode directly within your Claude Desktop environment. You get the benefit of Claude’s UI and file handling combined with Gemini’s specialized reasoning capabilities.
Example: You are designing a multi-region database failover strategy. You tell Claude: "Use the thinkdeep tool to analyze my current proposal and look for potential data consistency issues during a partial network partition."
Use Case 2: Auditing Massive Codebases with Extended Context
Problem: Standard LLM context windows often struggle with large repositories. If you need to analyze how a change in a low-level utility affects a massive project, you might hit token limits or experience "lost in the middle" degradation.
Solution: By using the analyze tool through Zen, you can leverage Gemini’s 1M+ token context window. This allows the AI to "see" the entire codebase at once without you having to manually copy-paste dozens of files.
Example: You are refactoring a legacy enterprise app. You prompt Claude: "Use the analyze tool via Zen to scan the entire project folder and list every instance where the deprecated 'UserAuth' class is instantiated, then suggest a migration path for each."
Use Case 3: Unbiased "Second-Opinion" Code Reviews
Problem: An AI model can sometimes suffer from confirmation bias, agreeing with its own previous logic or failing to spot its own common patterns. A rigorous developer needs a fresh "set of eyes" to catch security flaws or optimization opportunities.
Solution: The codereview tool enables you to send your code to a completely different model family (e.g., sending Claude-generated code to GPT-4o or an OpenRouter-hosted model). This provides a cross-platform validation of the code's quality.
Example: After Claude helps you write a custom encryption wrapper, you say: "Now use the codereview tool via Zen to have an OpenAI model audit this code for cryptographic vulnerabilities and PEP8 compliance."
Use Case 4: Cross-Model Debugging for Persistent Bugs
Problem: Sometimes a bug is so specific that one model gets "stuck" in a loop of providing the same incorrect fix. Different models are trained on different datasets and may recognize specific library quirks that others don't.
Solution: Use the debug tool to cycle the problem through different models (like O3 or specialized coding models via OpenRouter). This allows for collaborative troubleshooting where one model provides the context and the other provides the fix.
Example: You have a stubborn memory leak in a C++ application. Claude’s suggestions haven't worked. You prompt: "Claude, use the debug tool to ask the most advanced model available via OpenRouter to analyze these heap dumps and identify the source of the leak."