Access multiple LLM APIs from one place. Call ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, and Mistral with intelligent model selection, preferences, and prompt logging.
An MCP (Model Context Protocol) server that provides unified access to multiple Large Language Model APIs for AI coding environments like Cursor and Claude Desktop.
- 8 LLM Providers: ChatGPT, Claude, DeepSeek, Gemini, Grok, Kimi, Perplexity, Mistral
- Smart Model Selection: Tag-based preferences (coding, business, reasoning, math, creative, general)
- Prompt Logging: Track all prompts with history, statistics, and analytics
- Cost Optimization: Choose flagship or cheaper models based on preference
- Easy Setup: One-click install in Cursor or simple manual setup
- Call All LLMs: Get responses from all providers simultaneously
Ready to access multiple LLMs? Install in seconds:
Install in Cursor (Recommended):
Install in Cursor
Or install manually:
npm install -g cross-llm-mcp
# Or from source:
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp && npm install && npm run build

Per-provider tools:

- call-chatgpt: OpenAI's ChatGPT API
- call-claude: Anthropic's Claude API
- call-deepseek: DeepSeek API
- call-gemini: Google's Gemini API
- call-grok: xAI's Grok API
- call-kimi: Moonshot AI's Kimi API
- call-perplexity: Perplexity AI API
- call-mistral: Mistral AI API
- call-all-llms: Call all LLMs with the same prompt
- call-llm: Call a specific provider by name
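For example, call-llm targets one provider by name. A sketch of such a call; the argument name "provider" is an assumption inferred from the tool description, so check your installed version:

```json
{
  "tool": "call-llm",
  "arguments": {
    "provider": "claude",
    "prompt": "Summarize this function in one sentence",
    "temperature": 0.3
  }
}
```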
- get-user-preferences: Get current preferences
- set-user-preferences: Set default model, cost preference, and tag-based preferences
- get-models-by-tag: Find models by tag (coding, business, reasoning, math, creative, general)
- get-prompt-history: View prompt history with filters
- get-prompt-stats: Get statistics about prompt logs
- delete-prompt-entries: Delete log entries by criteria
- clear-prompt-history: Clear all prompt logs
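For instance, get-prompt-stats needs no required arguments; a minimal invocation sketch (whether it also accepts optional filters is not documented here):

```json
{
  "tool": "get-prompt-stats",
  "arguments": {}
}
```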
Click the install link above or use:
cursor://anysphere.cursor-deeplink/mcp/install?name=cross-llm-mcp&config=eyJjcm9zcy1sbG0tbWNwIjp7ImNvbW1hbmQiOiJucHgiLCJhcmdzIjpbIi15IiwiY3Jvc3MtbGxtLW1jcCJdfX0=
After installation, add your API keys in Cursor settings (see Configuration below).
Requirements: Node.js 18+ and npm
# Clone and build
git clone https://github.com/JamesANZ/cross-llm-mcp.git
cd cross-llm-mcp
npm install
npm run build

Add to claude_desktop_config.json:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"cross-llm-mcp": {
"command": "node",
"args": ["/absolute/path/to/cross-llm-mcp/build/index.js"],
"env": {
"OPENAI_API_KEY": "your_openai_api_key_here",
"ANTHROPIC_API_KEY": "your_anthropic_api_key_here",
"DEEPSEEK_API_KEY": "your_deepseek_api_key_here",
"GEMINI_API_KEY": "your_gemini_api_key_here",
"XAI_API_KEY": "your_grok_api_key_here",
"KIMI_API_KEY": "your_kimi_api_key_here",
"PERPLEXITY_API_KEY": "your_perplexity_api_key_here",
"MISTRAL_API_KEY": "your_mistral_api_key_here"
}
}
}
}

Restart Claude Desktop after configuration.
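Before restarting, it is worth confirming that the path in the config actually points at a built server. A quick check; /absolute/path/to/cross-llm-mcp is the same placeholder used in the config above, so substitute your real clone location:

```shell
# Verify the compiled entry point exists at the path given in the config.
# /absolute/path/to/cross-llm-mcp is a placeholder; use your real clone path.
if test -f /absolute/path/to/cross-llm-mcp/build/index.js; then
  echo "build output found"
else
  echo "missing build/index.js: run npm run build in the repo first"
fi
```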
Set environment variables for the LLM providers you want to use:
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
export DEEPSEEK_API_KEY="your_deepseek_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
export XAI_API_KEY="your_grok_api_key"
export KIMI_API_KEY="your_kimi_api_key"
export PERPLEXITY_API_KEY="your_perplexity_api_key"
export MISTRAL_API_KEY="your_mistral_api_key"

Get API keys from each provider:

- OpenAI: https://platform.openai.com/api-keys
- Anthropic: https://console.anthropic.com/
- DeepSeek: https://platform.deepseek.com/
- Google Gemini: https://makersuite.google.com/app/apikey
- xAI Grok: https://console.x.ai/
- Moonshot AI: https://platform.moonshot.ai/
- Perplexity: https://www.perplexity.ai/hub
- Mistral: https://console.mistral.ai/
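Once exported, a quick sanity check that the keys are visible to the shell that will launch the server (a sketch using POSIX printenv; it prints which variables are set without echoing their values):

```shell
# Report which provider keys are set in the current shell,
# without printing their values.
for var in OPENAI_API_KEY ANTHROPIC_API_KEY DEEPSEEK_API_KEY GEMINI_API_KEY \
           XAI_API_KEY KIMI_API_KEY PERPLEXITY_API_KEY MISTRAL_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var: set"
  else
    echo "$var: missing"
  fi
done
```

Keys for providers you don't plan to call can simply be left unset.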
Get a response from OpenAI:
{
"tool": "call-chatgpt",
"arguments": {
"prompt": "Explain quantum computing in simple terms",
"temperature": 0.7,
"max_tokens": 500
}
}

Get responses from all providers:
{
"tool": "call-all-llms",
"arguments": {
"prompt": "Write a short poem about AI",
"temperature": 0.8
}
}

Automatically use the best model for each task type:
{
"tool": "set-user-preferences",
"arguments": {
"defaultModel": "gpt-4o",
"costPreference": "cheaper",
"tagPreferences": {
"coding": "deepseek-r1",
"general": "gpt-4o",
"business": "claude-3.5-sonnet-20241022",
"reasoning": "deepseek-r1",
"math": "deepseek-r1",
"creative": "gpt-4o"
}
}
}

View your prompt logs:
{
"tool": "get-prompt-history",
"arguments": {
"provider": "chatgpt",
"limit": 10
}
}

Models are tagged by their strengths:
- coding: deepseek-r1, deepseek-coder, gpt-4o, claude-3.5-sonnet-20241022
- business: claude-3-opus-20240229, gpt-4o, gemini-1.5-pro
- reasoning: deepseek-r1, o1-preview, claude-3.5-sonnet-20241022
- math: deepseek-r1, o1-preview, o1-mini
- creative: gpt-4o, claude-3-opus-20240229, gemini-1.5-pro
- general: gpt-4o-mini, claude-3-haiku-20240307, gemini-1.5-flash
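These tags pair with the get-models-by-tag tool; a sketch of looking up the coding models, assuming the parameter is named "tag":

```json
{
  "tool": "get-models-by-tag",
  "arguments": {
    "tag": "coding"
  }
}
```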
- Multi-Perspective Analysis: Get different perspectives from multiple LLMs
- Model Comparison: Compare responses to understand strengths and weaknesses
- Cost Optimization: Choose the most cost-effective model for each task
- Quality Assurance: Cross-reference responses from multiple models
- Intelligent Selection: Automatically use the best model for coding, business, reasoning, etc.
- Prompt Analytics: Track usage, costs, and patterns with automatic logging
Built with: Node.js, TypeScript, MCP SDK
Dependencies: @modelcontextprotocol/sdk, superagent, zod
Platforms: macOS, Windows, Linux
Preference Storage:

- Unix/macOS: ~/.cross-llm-mcp/preferences.json
- Windows: %APPDATA%/cross-llm-mcp/preferences.json

Prompt Log Storage:

- Unix/macOS: ~/.cross-llm-mcp/prompts.json
- Windows: %APPDATA%/cross-llm-mcp/prompts.json
⭐ If this project helps you, please star it on GitHub! ⭐
Contributions welcome! Please open an issue or submit a pull request.
MIT License β see LICENSE.md for details.
If you find this project useful, consider supporting it:
⚡ Lightning Network
lnbc1pjhhsqepp5mjgwnvg0z53shm22hfe9us289lnaqkwv8rn2s0rtekg5vvj56xnqdqqcqzzsxqyz5vqsp5gu6vh9hyp94c7t3tkpqrp2r059t4vrw7ps78a4n0a2u52678c7yq9qyyssq7zcferywka50wcy75skjfrdrk930cuyx24rg55cwfuzxs49rc9c53mpz6zug5y2544pt8y9jflnq0ltlha26ed846jh0y7n4gm8jd3qqaautqa
₿ Bitcoin: bc1ptzvr93pn959xq4et6sqzpfnkk2args22ewv5u2th4ps7hshfaqrshe0xtp
Ξ Ethereum/EVM: 0x42ea529282DDE0AA87B42d9E83316eb23FE62c3f