🤖 Cross-LLM MCP Server

Access multiple LLM APIs from one place. Call ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, and Mistral with intelligent model selection, preferences, and prompt logging.

An MCP (Model Context Protocol) server that provides unified access to multiple Large Language Model APIs for AI coding environments like Cursor and Claude Desktop.


Why Use Cross-LLM MCP?

  • 🌐 8 LLM Providers – ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral
  • 🎯 Smart Model Selection – Tag-based preferences (coding, business, reasoning, math, creative, general)
  • 📊 Prompt Logging – Track all prompts with history, statistics, and analytics
  • 💰 Cost Optimization – Choose flagship or cheaper models based on preference
  • ⚡ Easy Setup – One-click install in Cursor or simple manual setup
  • 🔄 Call All LLMs – Get responses from all providers simultaneously

Quick Start

Ready to access multiple LLMs? Install in seconds:

Install in Cursor (Recommended):

🔗 Install in Cursor

Or install manually:

npm install -g cross-llm-mcp
# Or from source:
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp && npm install && npm run build

Features

🤖 Individual LLM Tools

  • call-chatgpt – OpenAI's ChatGPT API
  • call-claude – Anthropic's Claude API
  • call-deepseek – DeepSeek API
  • call-gemini – Google's Gemini API
  • call-grok – xAI's Grok API
  • call-kimi – Moonshot AI's Kimi API
  • call-perplexity – Perplexity AI API
  • call-mistral – Mistral AI API

🔄 Combined Tools

  • call-all-llms – Call all LLMs with the same prompt
  • call-llm – Call a specific provider by name

βš™οΈ Preferences & Model Selection

  • get-user-preferences – Get current preferences
  • set-user-preferences – Set default model, cost preference, and tag-based preferences
  • get-models-by-tag – Find models by tag (coding, business, reasoning, math, creative, general)

πŸ“ Prompt Logging

  • get-prompt-history – View prompt history with filters
  • get-prompt-stats – Get statistics about prompt logs
  • delete-prompt-entries – Delete log entries by criteria
  • clear-prompt-history – Clear all prompt logs

Installation

Cursor (One-Click)

Click the install link above or use:

cursor://anysphere.cursor-deeplink/mcp/install?name=cross-llm-mcp&config=eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=

After installation, add your API keys in Cursor settings (see Configuration below).
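The `config` parameter in the deeplink is simply a base64-encoded JSON blob. If you want to verify what the one-click install registers before clicking, you can decode it yourself; a quick sketch using Node's `Buffer`:

```typescript
// Decode the base64 `config` payload from the Cursor install deeplink
// to inspect the MCP server entry it registers.
const encoded =
  "eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=";

const decoded = JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
console.log(decoded);
// { 'cross-llm-mcp': { command: 'npx', args: [ '-y', 'cross-llm-mcp' ] } }
```

So the deeplink configures Cursor to launch the server with `npx -y cross-llm-mcp`, equivalent to the manual setup below.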

Manual Installation

Requirements: Node.js 18+ and npm

# Clone and build
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp
npm install
npm run build

Claude Desktop

Add to claude_desktop_config.json:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "cross-llm-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/cross-llm-mcp/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key_here",
        "ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
        "DEEPSEEK_API_KEY": "your_deepseek_api_key_here",
        "GEMINI_API_KEY": "your_gemini_api_key_here",
        "XAI_API_KEY": "your_grok_api_key_here",
        "KIMI_API_KEY": "your_kimi_api_key_here",
        "PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
        "MISTRAL_API_KEY": "your_mistral_api_key_here"
      }
    }
  }
}

Restart Claude Desktop after configuration.

Configuration

API Keys

Set environment variables for the LLM providers you want to use:

export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export DEEPSEEK_API_KEY="your_deepseek_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
export XAI_API_KEY="your_grok_api_key"
export KIMI_API_KEY="your_kimi_api_key"
export PERPLEXITY_API_KEY="your_perplexity_api_key"
export MISTRAL_API_KEY="your_mistral_api_key"
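Only providers whose keys are present are usable, so it can help to check which ones are missing before launching the server. A small illustrative helper (not part of the package) that takes the environment as input so it is easy to test:

```typescript
// Provider API key variables the server reads, per the list above.
const PROVIDER_KEYS = [
  "OPENAI_API_KEY", "ANTHROPIC_API_KEY", "DEEPSEEK_API_KEY", "GEMINI_API_KEY",
  "XAI_API_KEY", "KIMI_API_KEY", "PERPLEXITY_API_KEY", "MISTRAL_API_KEY",
];

// Return the key names that are unset or empty in the given environment.
function missingKeys(env: Record<string, string | undefined>): string[] {
  return PROVIDER_KEYS.filter((k) => !env[k]);
}

// Check the real environment:
console.log(missingKeys(process.env));
```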


Usage Examples

Call ChatGPT

Get a response from OpenAI:

{
  "tool": "call-chatgpt",
  "arguments": {
    "prompt": "Explain quantum computing in simple terms",
    "temperature": 0.7,
    "max_tokens": 500
  }
}

Call All LLMs

Get responses from all providers:

{
  "tool": "call-all-llms",
  "arguments": {
    "prompt": "Write a short poem about AI",
    "temperature": 0.8
  }
}
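A call-all-llms style tool amounts to fanning the same prompt out to every configured provider and collecting each result, whether it succeeded or failed. A minimal sketch of that pattern with stub providers (the real server's internals may differ):

```typescript
type LlmCall = (prompt: string) => Promise<string>;

// Stub providers standing in for real API clients.
const providers: Record<string, LlmCall> = {
  chatgpt: async (p) => `chatgpt: ${p}`,
  claude: async (p) => `claude: ${p}`,
  deepseek: async () => { throw new Error("missing API key"); },
};

// Fan the prompt out; Promise.allSettled keeps one provider's failure
// from discarding the other providers' responses.
async function callAllLlms(prompt: string) {
  const names = Object.keys(providers);
  const results = await Promise.allSettled(names.map((n) => providers[n](prompt)));
  return names.map((name, i) => {
    const r = results[i];
    return r.status === "fulfilled"
      ? { name, ok: true, response: r.value }
      : { name, ok: false, error: String(r.reason) };
  });
}

callAllLlms("Write a short poem about AI").then(console.log);
```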

Set Tag-Based Preferences

Automatically use the best model for each task type:

{
  "tool": "set-user-preferences",
  "arguments": {
    "defaultModel": "gpt-4o",
    "costPreference": "cheaper",
    "tagPreferences": {
      "coding": "deepseek-r1",
      "general": "gpt-4o",
      "business": "claude-3.5-sonnet-20241022",
      "reasoning": "deepseek-r1",
      "math": "deepseek-r1",
      "creative": "gpt-4o"
    }
  }
}
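The effect of tag preferences is essentially a lookup: a task's tag resolves to its preferred model, falling back to the default model when no tag-specific preference exists. A hypothetical sketch of that resolution (the shapes mirror the JSON above, not the actual source):

```typescript
interface Preferences {
  defaultModel: string;
  costPreference: "flagship" | "cheaper";
  tagPreferences: Record<string, string>;
}

const prefs: Preferences = {
  defaultModel: "gpt-4o",
  costPreference: "cheaper",
  tagPreferences: { coding: "deepseek-r1", creative: "gpt-4o" },
};

// Pick the model for a task tag, falling back to the default model.
function resolveModel(prefs: Preferences, tag?: string): string {
  return (tag && prefs.tagPreferences[tag]) || prefs.defaultModel;
}

console.log(resolveModel(prefs, "coding")); // deepseek-r1
console.log(resolveModel(prefs, "math"));   // gpt-4o (fallback)
```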

Get Prompt History

View your prompt logs:

{
  "tool": "get-prompt-history",
  "arguments": {
    "provider": "chatgpt",
    "limit": 10
  }
}
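Filtering like this is straightforward once the log is a flat array of entries. A sketch of the equivalent query over an in-memory log (the entry shape is illustrative; the actual prompts.json schema may differ):

```typescript
interface PromptEntry {
  provider: string;
  prompt: string;
  timestamp: string; // ISO 8601
}

const log: PromptEntry[] = [
  { provider: "chatgpt", prompt: "hi", timestamp: "2024-01-01T00:00:00Z" },
  { provider: "claude", prompt: "hey", timestamp: "2024-01-02T00:00:00Z" },
  { provider: "chatgpt", prompt: "yo", timestamp: "2024-01-03T00:00:00Z" },
];

// Return the most recent `limit` entries, optionally restricted to one provider.
function promptHistory(log: PromptEntry[], provider?: string, limit = 10): PromptEntry[] {
  const filtered = provider ? log.filter((e) => e.provider === provider) : log;
  return filtered
    .slice()
    .sort((a, b) => b.timestamp.localeCompare(a.timestamp))
    .slice(0, limit);
}

console.log(promptHistory(log, "chatgpt", 10));
```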

Model Tags

Models are tagged by their strengths:

  • coding: deepseek-r1, deepseek-coder, gpt-4o, claude-3.5-sonnet-20241022
  • business: claude-3-opus-20240229, gpt-4o, gemini-1.5-pro
  • reasoning: deepseek-r1, o1-preview, claude-3.5-sonnet-20241022
  • math: deepseek-r1, o1-preview, o1-mini
  • creative: gpt-4o, claude-3-opus-20240229, gemini-1.5-pro
  • general: gpt-4o-mini, claude-3-haiku-20240307, gemini-1.5-flash

Use Cases

  • Multi-Perspective Analysis – Get different perspectives from multiple LLMs
  • Model Comparison – Compare responses to understand strengths and weaknesses
  • Cost Optimization – Choose the most cost-effective model for each task
  • Quality Assurance – Cross-reference responses from multiple models
  • Intelligent Selection – Automatically use the best model for coding, business, reasoning, etc.
  • Prompt Analytics – Track usage, costs, and patterns with automatic logging

Technical Details

Built with: Node.js, TypeScript, MCP SDK
Dependencies: @modelcontextprotocol/sdk, superagent, zod
Platforms: macOS, Windows, Linux

Preference Storage:

  • Unix/macOS: ~/.cross-llm-mcp/preferences.json
  • Windows: %APPDATA%/cross-llm-mcp/preferences.json

Prompt Log Storage:

  • Unix/macOS: ~/.cross-llm-mcp/prompts.json
  • Windows: %APPDATA%/cross-llm-mcp/prompts.json
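Resolving those locations programmatically is a one-liner per platform. A sketch written as a pure function so both branches are checkable (the real server may resolve paths differently):

```typescript
// Compute the preference-file location for a given platform,
// mirroring the documented storage paths above.
function preferencesPath(platform: string, home: string, appData?: string): string {
  return platform === "win32"
    ? `${appData}\\cross-llm-mcp\\preferences.json`
    : `${home}/.cross-llm-mcp/preferences.json`;
}

console.log(preferencesPath(process.platform, process.env.HOME ?? "~", process.env.APPDATA));
```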

Contributing

⭐ If this project helps you, please star it on GitHub! ⭐

Contributions welcome! Please open an issue or submit a pull request.

License

MIT License – see LICENSE.md for details.

Support

If you find this project useful, consider supporting it:

⚡ Lightning Network

lnbc1pjhhsqepp5mjgwnvg0z53shm22hfe9us289lnaqkwv8rn2s0rtekg5vvj56xnqdqqcqzzsxqyz5vqsp5gu6vh9hyp94c7t3tkpqrp2r059t4vrw7ps78a4n0a2u52678c7yq9qyyssq7zcferywka50wcy75skjfrdrk930cuyx24rg55cwfuzxs49rc9c53mpz6zug5y2544pt8y9jflnq0ltlha26ed846jh0y7n4gm8jd3qqaautqa

₿ Bitcoin: bc1ptzvr93pn959xq4et6sqzpfnkk2args22ewv5u2th4ps7hshfaqrshe0xtp

Ξ Ethereum/EVM: 0x42ea529282DDE0AA87B42d9E83316eb23FE62c3f
