
feat: surface GLM-backed Codex and Claude runtimes#1823

Draft
Marve10s wants to merge 5 commits into pingdotgg:main from Marve10s:feat/glm-provider

Conversation

Contributor

@Marve10s Marve10s commented Apr 7, 2026

Summary

  • stops treating GLM as a fake top-level provider and instead surfaces GLM-backed Codex and Claude runtimes
  • detects Codex custom model provider config and Claude Z.ai Anthropic-compatible config, then reflects that setup in status labels and model lists
  • removes the unfinished GLM bridge/adapter stack that was not delivering unique value over existing Codex or Claude integrations

What changed

  • CodexProvider now reads model_provider from Codex config, shows Codex / GLM when applicable, skips OpenAI login checks for custom backends, and swaps built-in suggestions to GLM models
  • ClaudeProvider now detects Z.ai's Anthropic-compatible Claude setup from env or ~/.claude/settings.json, shows Claude / GLM, and maps Claude tiers to configured GLM models
  • provider snapshots now expose an optional displayName, and the web picker, banners, chat labels, and settings cards use that live runtime/backend label
  • removed the standalone glm provider, GLM bridge, GLM adapter, GLM settings surface, and related routing/contracts that implied a third runtime
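The detection described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual CodexProvider code: the `CodexConfig` shape, the `codexDisplayName` helper, and the Z.ai base-URL heuristic are all assumptions made for the example.

```typescript
// Hypothetical sketch of the displayName detection described above.
// Real config shapes and provider logic in CodexProvider may differ.
interface CodexConfig {
  model_provider?: string;
  model_providers?: Record<string, { base_url?: string }>;
}

function codexDisplayName(config: CodexConfig): string {
  const providerId = config.model_provider;
  if (!providerId) return "Codex"; // default OpenAI-backed runtime

  const baseUrl = config.model_providers?.[providerId]?.base_url ?? "";
  // Treat a GLM-named provider or a Z.ai-hosted backend as GLM.
  if (providerId.toLowerCase().includes("glm") || baseUrl.includes("z.ai")) {
    return "Codex / GLM";
  }
  // Any other custom backend keeps its configured provider id.
  return `Codex / ${providerId}`;
}
```

The same pattern would apply on the Claude side, checking env vars or `~/.claude/settings.json` for an Anthropic-compatible Z.ai base URL before labeling the runtime "Claude / GLM".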

User impact

  • users who already point Codex at GLM now see Codex / GLM and GLM model suggestions instead of generic Codex/OpenAI labeling
  • users who run Claude Code against Z.ai now see Claude / GLM and the mapped GLM models instead of a misleading Claude-only surface
  • T3Code now adds value through accurate detection and UX instead of duplicating functionality users already have in their agent configs

Validation

  • bun fmt
  • bun lint
  • bun typecheck
  • cd apps/server && bun run test src/provider/Layers/ProviderRegistry.test.ts src/provider/Layers/ProviderAdapterRegistry.test.ts
  • browser smoke test with isolated temp HOME, CODEX_HOME, and T3CODE_HOME plus fake GLM-backed configs for Codex and Claude
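The isolated smoke-test environment above might be set up along these lines. This is a sketch under stated assumptions: the exact config file paths, TOML keys, and settings.json schema are illustrative, not the project's verified formats.

```shell
# Hypothetical sketch of the isolated smoke-test sandbox described above.
set -eu

SANDBOX="$(mktemp -d)"
export HOME="$SANDBOX/home"
export CODEX_HOME="$SANDBOX/codex"
export T3CODE_HOME="$SANDBOX/t3code"
mkdir -p "$HOME/.claude" "$CODEX_HOME" "$T3CODE_HOME"

# Fake GLM-backed Codex config (TOML shape is an assumption).
cat > "$CODEX_HOME/config.toml" <<'EOF'
model_provider = "glm"

[model_providers.glm]
base_url = "https://api.z.ai/api/coding/paas/v4"
EOF

# Fake Z.ai Anthropic-compatible Claude settings (schema is an assumption).
cat > "$HOME/.claude/settings.json" <<'EOF'
{ "env": { "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic" } }
EOF

echo "sandbox ready: $SANDBOX"
```

Running the app with these variables exported keeps the real `~/.codex` and `~/.claude` untouched while exercising the GLM-detection paths.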

Add GLM as a Codex-backed provider that routes through a local
Responses-to-ChatCompletions bridge. GLM sessions reuse the Codex
app-server runtime while presenting as a separate provider in the UI.

Contracts: add "glm" to ProviderKind, ModelSelection, GlmSettings,
and all Record<ProviderKind, ...> exhaustiveness sites.

Server: GlmAdapter (thin CodexAdapter delegate), GlmProvider (snapshot
service checking GLM_API_KEY), GLM bridge (loopback HTTP translating
Responses <-> Chat Completions), shared codexLaunchConfig builder,
text generation routing for GLM.

Web: GLM in provider picker, settings panel with env var hint,
composer registry entry, model selection config, GlmIcon.

Tests: 40 new tests covering bridge translation (Responses -> Chat
Completions, Chat Completions streaming -> Responses SSE), launch
config builder, and updated existing tests for the new provider.

coderabbitai bot commented Apr 7, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 708e942a-240b-4272-a129-a81b4520608c


@github-actions github-actions bot added size:XXL 1,000+ changed lines (additions + deletions). vouch:unvouched PR author is not yet trusted in the VOUCHED list. labels Apr 7, 2026
Contributor Author

Marve10s commented Apr 7, 2026

Yes, I have a lot of motivation to integrate GLM plans into Codex (they work with CC as well, btw).
Does someone have a sub to test? Mine expired.

- makeGlmAdapterLive now accepts and plumbs options parameter
- Documented that GLM runtime events flow through the Codex adapter
  stream with provider="codex" — event re-attribution by
  ProviderService based on session directory is a follow-up
Contributor Author

Marve10s commented Apr 7, 2026

Addressed both review comments in aaeaf5b:

1. makeGlmAdapterLive ignoring options (Low) — Fixed. The factory now accepts and plumbs the options parameter through to the adapter constructor.

2. Empty streamEvents (Medium) — Valid observation. GLM events flow through the Codex adapter's stream since GLM delegates all operations to Codex. The GLM adapter's own streamEvents queue is currently empty. This is a known limitation of the thin-delegate architecture: the Codex adapter's stream is a Queue-backed Stream that can only be consumed once (by ProviderService), so the GLM adapter cannot also tap into it.

The correct follow-up is to add event re-attribution in ProviderService.processRuntimeEvent based on the ProviderSessionDirectory binding — when an event arrives for a threadId that's bound to "glm", remap event.provider before publishing. This is tracked in the "What's not done yet" checklist.
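The re-attribution follow-up described above could be sketched as below. This is a minimal illustration, not the ProviderService code: the event shape, the `sessionDirectory` map, and the `reattribute` helper are assumptions standing in for the real `ProviderSessionDirectory` binding.

```typescript
// Hypothetical sketch of event re-attribution in processRuntimeEvent.
type ProviderKind = "codex" | "claude" | "glm";

interface RuntimeEvent {
  threadId: string;
  provider: ProviderKind;
}

// Stand-in for ProviderSessionDirectory: threadId -> owning provider.
const sessionDirectory = new Map<string, ProviderKind>();

function reattribute(event: RuntimeEvent): RuntimeEvent {
  const bound = sessionDirectory.get(event.threadId);
  if (bound && bound !== event.provider) {
    // Event arrived on the Codex adapter's stream but the thread is bound
    // to GLM: remap before publishing so subscribers see the right provider.
    return { ...event, provider: bound };
  }
  return event;
}
```

Because the remap happens at the single publish point, the thin GLM delegate never needs to tap the Queue-backed Codex stream itself.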


AmoonPod commented Apr 9, 2026

> Yes, I have a lot of motivation to integrate GLM plans into Codex (they work with CC as well, btw). Does someone have a sub to test? Mine expired.

I've been using my GLM coding plan with Claude in t3code.

@github-actions github-actions bot added size:L 100-499 changed lines (additions + deletions). and removed size:XXL 1,000+ changed lines (additions + deletions). labels Apr 9, 2026
@Marve10s Marve10s changed the title from "feat: add GLM (Z.ai) as a third provider" to "feat: surface GLM-backed Codex and Claude runtimes" Apr 9, 2026

🟠 High

  if (parsedVersion && !isCodexCliVersionSupported(parsedVersion)) {
    return buildServerProvider({
      provider: PROVIDER,
      enabled: codexSettings.enabled,
      checkedAt,
      models,
      probe: {
        installed: true,
        version: parsedVersion,
        status: "error",
        auth: { status: "unknown" },
        message: formatCodexCliUpgradeMessage(parsedVersion),
      },
    });
  }

The buildServerProvider call for unsupported CLI versions (lines 539-552) omits displayName, so users with a GLM model provider see the generic name "Codex" instead of "Codex / GLM" in the error state. This is inconsistent with all other buildServerProvider calls in this function, which include displayName. Consider adding displayName to this call to ensure the provider name is consistent across all status scenarios.

-  if (parsedVersion && !isCodexCliVersionSupported(parsedVersion)) {
+  if (parsedVersion && !isCodexCliVersionSupported(parsedVersion)) {
     return buildServerProvider({
       provider: PROVIDER,
       enabled: codexSettings.enabled,
       checkedAt,
       models,
+      displayName,
       probe: {
         installed: true,
         version: parsedVersion,
         status: "error",
         auth: { status: "unknown" },
         message: formatCodexCliUpgradeMessage(parsedVersion),
       },
     });
   }
Evidence trail:
apps/server/src/provider/Layers/CodexProvider.ts:369-377 (codexDisplayName function showing 'Codex / GLM' for GLM providers), apps/server/src/provider/Layers/CodexProvider.ts:447 (displayName computed), apps/server/src/provider/Layers/CodexProvider.ts:539-551 (buildServerProvider call for unsupported CLI version - missing displayName), apps/server/src/provider/Layers/CodexProvider.ts:456-468,479-494,497-509,519-535,554-567,584-601,603-617,621-638 (other buildServerProvider calls that all include displayName)


Labels

size:L 100-499 changed lines (additions + deletions). vouch:unvouched PR author is not yet trusted in the VOUCHED list.
