
Conversation

@caozhiyuan
Contributor

This pull request introduces significant updates to the translation layer between OpenAI and Anthropic message formats, especially around supporting "thinking" blocks for Claude models, and improves protocol compliance for interleaved thinking. It also updates API versioning and various headers, and brings in several bug fixes and refactors for message streaming and token counting.

Anthropic "thinking" protocol and translation improvements:

  • Added support for "thinking" blocks in Claude model translations, including handling of reasoning_text and reasoning_opaque fields, filtering by signature, and ensuring protocol compliance by injecting reminders and requirements into system prompts.
  • Enhanced translation from OpenAI to Anthropic by reconstructing "thinking" blocks from OpenAI's reasoning_text and reasoning_opaque fields, and ensuring proper block ordering in responses.
  • Improved mapping of system prompts and user messages to enforce Claude's interleaved thinking protocol, including reminders and minimum reasoning requirements.
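A minimal sketch of the reconstruction idea described above, assuming illustrative type and field names (only reasoning_text, reasoning_opaque, and the thinking/signature block shape come from this PR's description; everything else is hypothetical):

```typescript
// Hypothetical sketch: rebuild an Anthropic-style "thinking" block from the
// OpenAI-side reasoning fields this PR describes. Identifier names here are
// illustrative, not the PR's actual code.
interface OpenAIAssistantMessage {
  content: string
  reasoning_text?: string
  reasoning_opaque?: string
}

type AnthropicBlock =
  | { type: "thinking"; thinking: string; signature: string }
  | { type: "text"; text: string }

function toAnthropicBlocks(msg: OpenAIAssistantMessage): Array<AnthropicBlock> {
  const blocks: Array<AnthropicBlock> = []
  // Only reconstruct a thinking block when both the reasoning text and the
  // opaque signature survived the OpenAI round trip; the signature is what
  // allows filtering out blocks that cannot be replayed.
  if (msg.reasoning_text && msg.reasoning_opaque) {
    blocks.push({
      type: "thinking",
      thinking: msg.reasoning_text,
      signature: msg.reasoning_opaque,
    })
  }
  // The thinking block must come before the text block in the response.
  blocks.push({ type: "text", text: msg.content })
  return blocks
}
```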

API and protocol versioning:

  • Updated Copilot and API version constants to use 0.35.0 and 2025-10-01 respectively, and changed the openai-intent header to "conversation-agent".

Streaming and state management enhancements:

  • Refactored streaming translation logic to modularize event handling, added support for thinkingBlockOpen state, and ensured correct block closing and event emission for thinking and tool blocks.
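
The thinkingBlockOpen bookkeeping can be sketched roughly as follows; the class and event shapes are assumptions for illustration, not the PR's actual code (only the thinkingBlockOpen name and the close-before-next-block rule come from the description above):

```typescript
// Illustrative state machine: before a non-thinking block starts, any open
// thinking block must be closed with a content_block_stop event, so blocks
// are never left dangling when tool or text output begins.
type StreamEvent =
  | { type: "content_block_start"; blockType: "thinking" | "text" | "tool_use"; index: number }
  | { type: "content_block_stop"; index: number }

class StreamState {
  thinkingBlockOpen = false
  events: Array<StreamEvent> = []
  private index = -1

  openBlock(blockType: "thinking" | "text" | "tool_use"): void {
    // Close the open thinking block before switching to another block type.
    if (this.thinkingBlockOpen && blockType !== "thinking") this.closeThinking()
    this.index += 1
    this.events.push({ type: "content_block_start", blockType, index: this.index })
    if (blockType === "thinking") this.thinkingBlockOpen = true
  }

  closeThinking(): void {
    if (!this.thinkingBlockOpen) return
    this.events.push({ type: "content_block_stop", index: this.index })
    this.thinkingBlockOpen = false
  }
}
```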

Token counting and content mapping fixes:

  • Updated token counting logic to skip the reasoning_opaque field, ensuring accurate token usage reporting.
  • Refined content mapping to exclude thinking blocks from user-visible content except where required, and improved image block handling.
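
A hedged sketch of the token-counting fix: reasoning_opaque is an opaque signature blob rather than model-visible text, so it is skipped when estimating usage. The function name and the rough 4-characters-per-token heuristic are purely illustrative; the real implementation presumably uses a proper tokenizer.

```typescript
// Illustrative token estimate that skips the reasoning_opaque field, per the
// fix described in this PR. Not the repository's actual calculateMessageTokens.
function estimateMessageTokens(message: Record<string, unknown>): number {
  let chars = 0
  for (const [key, value] of Object.entries(message)) {
    if (key === "reasoning_opaque") continue // opaque signature, never counted
    if (typeof value === "string") chars += value.length
  }
  return Math.ceil(chars / 4) // crude chars-per-token heuristic
}
```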

Type and import updates:

  • Added and updated types for model capabilities and Anthropic message blocks, including signature and thinkingBlockOpen fields.
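
The type additions might look roughly like the following; interface names are assumptions for illustration (only the signature and thinkingBlockOpen field names come from the description above):

```typescript
// Illustrative shapes for the type updates this PR describes; the actual
// interface names in the repository may differ.
interface ModelCapabilities {
  supports: {
    thinking?: boolean // whether the model emits thinking blocks at all
  }
}

interface AnthropicThinkingBlock {
  type: "thinking"
  thinking: string
  signature: string // opaque signature used to validate replayed thinking blocks
}

interface StreamingState {
  thinkingBlockOpen: boolean // true while a thinking block is started but not stopped
}
```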

These changes collectively improve Claude model support, ensure protocol compliance, and enhance translation reliability between OpenAI and Anthropic message formats.

caozhiyuan and others added 17 commits November 19, 2025 21:32
…g order when stream=false and exclude reasoning_opaque from token calculation in calculateMessageTokens
When account type is not specified or set to 'individual', use the default
api.githubcopilot.com URL instead of constructing a subdomain-based URL.

This restores previous behavior where business users could work without
explicitly specifying their account type, as the default URL works for both
individual and business accounts.

Only constructs account-type-specific URLs (api.business.githubcopilot.com,
api.enterprise.githubcopilot.com) when those account types are explicitly
specified.
fix: use default API URL when account type is individual
@caozhiyuan
Contributor Author

caozhiyuan commented Jan 8, 2026

Note: for now, please configure Claude Code's settings.json as described here: https://github.com/ericc-ch/copilot-api?tab=readme-ov-file#manual-configuration-with-settingsjson .

{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "xxxxx",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "xxxxx",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini",
    "CLAUDE_CODE_SUBAGENT_MODEL": "gpt-5-mini",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
    "BASH_MAX_TIMEOUT_MS": "600000"
  },
  "permissions": {
    "allow": [
      "mcp__fetch__fetch",
      "Bash(gh pr:*)",
      "Bash(gh status:*)",
      "Bash(gh repo view:*)",
      "Bash(node -e:*)"
    ],
    "deny": [
      "WebSearch"
    ]
  },
  "alwaysThinkingEnabled": true
}

@caozhiyuan
Contributor Author

caozhiyuan commented Jan 9, 2026

If running in opencode, here is a sample .config/opencode/opencode.json (if you are not using the GitHub Copilot provider, add a custom local provider):

{
	"$schema": "https://opencode.ai/config.json",
	"model": "local/gpt-5.2",
	"small_model": "local/gpt-5-mini",
	"agent": {
		"build": {
			"model": "local/gpt-5.2"
		},
		"plan": {
			"model": "local/gpt-5.2"
		},
		"explore": {
			"model": "local/gpt-5-mini"
		}
	},
	"mcp": {
		"fetch": {
			"type": "local",
			"command": ["uvx", "mcp-server-fetch", "--ignore-robots-txt"],
			"enabled": true
		}
	},
	"provider": {
		"local": {
			"npm": "@ai-sdk/anthropic",
			"name": "My Local",
			"options": {
				"baseURL": "http://localhost:4141/v1",
				"apiKey": "dummy"
			},
			"models": {
				"gpt-5.2": {
					"name": "gpt-5.2",
					"limit": {
						"context": 200000,
						"output": 64000
					}
				},
				"gpt-5-mini": {
					"name": "gpt-5-mini",
					"limit": {
						"context": 128000,
						"output": 64000
					}
				},
				"claude-sonnet-4.5": {
					"id": "claude-sonnet-4.5",
					"name": "claude-sonnet-4.5",
					"limit": {
						"context": 128000,
						"output": 16000
					},
					"options": {
						"thinking": {
							"type": "enabled",
							"budgetTokens": 15999
						}
					}
				}
			}
		}
	}
}

@Arminova

  		"claude-sonnet-4.5": {
  			"id": "claude-sonnet-4.5",
  			"name": "claude-sonnet-4.5",
  			"limit": {
  				"context": 200000,
  				"output": 16000
  			},
  			"options": {
  				"thinking": {
  					"type": "enabled",
  					"budgetTokens": 16000
  				}
  			}
  		}

Why a 200000 context? It should be 128K; none of the Copilot models support 200000.


@caozhiyuan
Contributor Author

caozhiyuan commented Jan 10, 2026

@Arminova That was a copy mistake; I have changed it. GPT seems to support up to a 200k context.

@22Goose

22Goose commented Jan 11, 2026

Hi, have you tested the thinking mode on Opus 4.5?

 ERROR  Failed to create chat completions Response {}    10:20:06 PM

 ERROR  Error occurred: Failed to create chat completions    10:20:06 PM

    at <anonymous> (src/services/copilot/create-chat-completions.ts:39:15)
    at async <anonymous> (src/routes/messages/handler.ts:82:26)
    at async handleCompletion (src/routes/messages/handler.ts:67:16)
    at async <anonymous> (src/routes/messages/route.ts:12:18)
    at async dispatch (node_modules/hono/dist/compose.js:22:23)
    at async cors2 (node_modules/hono/dist/middleware/cors/index.js:84:11)
    at async dispatch (node_modules/hono/dist/compose.js:22:23)
    at async logger2 (node_modules/hono/dist/middleware/logger/index.js:38:11)
    at async dispatch (node_modules/hono/dist/compose.js:22:23)
    at async <anonymous> (node_modules/hono/dist/hono-base.js:201:31)
    at processTicksAndRejections (native:7:39)


 ERROR  HTTP error: { error:    10:20:06 PM
   { message: 'The requested model is not supported.',
     code: 'model_not_supported',
     param: 'model',
     type: 'invalid_request_error' } }

I got this error.

@caozhiyuan
Contributor Author

@22Goose Use https://github.com/caozhiyuan/copilot-api/tree/all, run bun run start start -v, and check the logs under the user .local\share\copilot-api\logs directory.

@22Goose

22Goose commented Jan 11, 2026

.local\share\copilot-api\logs

Besides, I noticed some discussion suggesting Copilot may not support thinking mode on Opus 4.5. I don't know whether we can use the Copilot API this way.

@caozhiyuan
Contributor Author

caozhiyuan commented Jan 11, 2026

Note: for now, please configure Claude Code's settings.json as described here: https://github.com/ericc-ch/copilot-api?tab=readme-ov-file#manual-configuration-with-settingsjson .

{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "xxxxx",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "xxxxx",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini",
    "CLAUDE_CODE_SUBAGENT_MODEL": "gpt-5-mini",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
    "BASH_MAX_TIMEOUT_MS": "600000"
  },
  "permissions": {
    "allow": [
      "mcp__fetch__fetch",
      "Bash(gh pr:*)",
      "Bash(gh status:*)",
      "Bash(gh repo view:*)",
      "Bash(node -e:*)"
    ],
    "deny": [
      "WebSearch"
    ]
  },
  "alwaysThinkingEnabled": true
}

@22Goose Change the user-level .claude/settings.json as above and start a new conversation.

@caozhiyuan
Contributor Author

caozhiyuan commented Jan 11, 2026

Currently, based on my usage, gpt-5.2 is stronger than the other models; I mainly use it for complex backend logic development. @22Goose In my setup, ANTHROPIC_MODEL and ANTHROPIC_DEFAULT_SONNET_MODEL are both set to gpt-5.2 (using the Responses API with high thinking effort).

@22Goose

22Goose commented Jan 11, 2026

Currently, based on my usage, gpt-5.2 is stronger than the other models; I mainly use it for complex backend logic development. @22Goose In my setup, ANTHROPIC_MODEL and ANTHROPIC_DEFAULT_SONNET_MODEL are both set to gpt-5.2 (using the Responses API with high thinking effort).

Thanks for the advice. TBH, I also mostly use gpt-5.2. I just wanted to test Opus 4.5 because of Boris Cherny's interview, to figure out which one is better.

@22Goose

22Goose commented Jan 11, 2026

Note: for now, please configure Claude Code's settings.json as described here: https://github.com/ericc-ch/copilot-api?tab=readme-ov-file#manual-configuration-with-settingsjson .
@22Goose Change the user-level .claude/settings.json as above and start a new conversation.

I changed the settings, but the outcome is much the same.

@caozhiyuan
Contributor Author

@22Goose Running bun run start start -v shows the available model names; you entered the wrong model name:

D GitHub Copilot Token fetched successfully!                                                                
i Available models:                                                                                         
- gpt-5-mini
- gpt-5
- gpt-4o-mini-2024-07-18
- gpt-4o-2024-11-20
- gpt-4o-2024-08-06
- grok-code-fast-1
- gpt-5.1
- gpt-5.1-codex
- gpt-5.1-codex-mini
- gpt-5.1-codex-max
- gpt-5-codex
- text-embedding-3-small
- text-embedding-3-small-inference
- claude-sonnet-4
- claude-sonnet-4.5
- claude-opus-4.5
- claude-haiku-4.5
- gemini-3-pro-preview
- gemini-3-flash-preview
- gemini-2.5-pro
- gpt-4.1-2025-04-14
- oswe-vscode-prime
- oswe-vscode-secondary
- gpt-5.2
- gpt-41-copilot
- gpt-3.5-turbo-0613
- gpt-4
- gpt-4-0613
- gpt-4-0125-preview
- gpt-4o-2024-05-13
- gpt-4-o-preview
- gpt-4.1
- gpt-3.5-turbo
- gpt-4o-mini
- gpt-4
- gpt-4o
- gpt-4-o-preview
- text-embedding-ada-002

@22Goose

22Goose commented Jan 12, 2026

@caozhiyuan I may have found where it goes wrong.
tl;dr: directly changing the model with /model claude-opus-4.5 in the CC VS Code extension can trigger this error.

Here is my setting:


{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "claude-opus-4.5",
    "ANTHROPIC_SMALL_FAST_MODEL": "claude-sonnet-4.5",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  },
  "permissions": {
    "allow": [
      "mcp__fetch__fetch",
      "Bash(gh pr:*)",
      "Bash(gh status:*)",
      "Bash(gh repo view:*)",
      "Bash(node -e:*)"
    ],
    "deny": [
      "WebSearch"
    ]
  },
  "alwaysThinkingEnabled": true
}


In the CC CLI:
"ANTHROPIC_MODEL": "claude-opus-4.5" takes effect. No error.
/model claude-opus-4.5 takes effect. No error.
/model opus 4.5 triggers the error.

In the CC VS Code extension:
"ANTHROPIC_MODEL": "claude-opus-4.5" does not take effect; Sonnet 4 is used instead. No error.
/model claude-opus-4.5 does not take effect, and also triggers the error.
/model opus 4.5 triggers the error.

I get the error log above mostly when working with the CC VS Code extension.

@caozhiyuan
Contributor Author

@caozhiyuan I may have found where it goes wrong. tl;dr: directly changing the model with /model claude-opus-4.5 in the CC VS Code extension can trigger this error.

Here is my setting:


{
  "$schema": "https://json.schemastore.org/claude-code-settings.json",
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "claude-opus-4.5",
    "ANTHROPIC_SMALL_FAST_MODEL": "claude-sonnet-4.5",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  },
  "permissions": {
    "allow": [
      "mcp__fetch__fetch",
      "Bash(gh pr:*)",
      "Bash(gh status:*)",
      "Bash(gh repo view:*)",
      "Bash(node -e:*)"
    ],
    "deny": [
      "WebSearch"
    ]
  },
  "alwaysThinkingEnabled": true
}

In the CC CLI: "ANTHROPIC_MODEL": "claude-opus-4.5" takes effect. No error. /model claude-opus-4.5 takes effect. No error. /model opus 4.5 triggers the error.

In the CC VS Code extension: "ANTHROPIC_MODEL": "claude-opus-4.5" does not take effect; Sonnet 4 is used instead. No error. /model claude-opus-4.5 does not take effect, and also triggers the error. /model opus 4.5 triggers the error.

I get the error log above mostly when working with the CC VS Code extension.

@22Goose These all need to be added:
"ANTHROPIC_MODEL": "xxxxx",
"ANTHROPIC_DEFAULT_SONNET_MODEL": "xxxxx",
"ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini",
"CLAUDE_CODE_SUBAGENT_MODEL": "gpt-5-mini",
"DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
"CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
and CLAUDE_CODE_SUBAGENT_MODEL / ANTHROPIC_DEFAULT_HAIKU_MODEL must be gpt-5-mini, otherwise Claude Code will waste a lot of your premium requests. You can set ANTHROPIC_MODEL and ANTHROPIC_DEFAULT_SONNET_MODEL to claude-sonnet-4.5, and ANTHROPIC_DEFAULT_OPUS_MODEL to claude-opus-4.5. Please read https://code.claude.com/docs/en/model-config
