Commit e75d6a5

Merge pull request #65 from ndycode/feat/gpt-5.4-support-v5.4.2

feat: add GPT-5.4 support and docs refresh

Parents: a791d03, 2228e77

27 files changed: +1652 additions, -129 deletions

CHANGELOG.md (17 additions, 0 deletions)

````diff
@@ -25,6 +25,23 @@ all notable changes to this project. dates are ISO format (YYYY-MM-DD).
 - **doctor safe-fix edge path**: `codex-doctor fix` now reports a clear non-crashing message when no eligible account is available for auto-switch.
 - **first-time import flow**: `codex-import` no longer fails with `No accounts to export` when storage is empty; pre-import backup is skipped cleanly in zero-account setups.
 
+## [5.4.2] - 2026-03-05
+
+### added
+
+- **gpt-5.4 + gpt-5.4-pro runtime support**: added model-map normalization and request-transform coverage for `gpt-5.4` (general) and optional `gpt-5.4-pro`.
+- **gpt-5.4-pro fallback edge**: default unsupported-model fallback chain now includes `gpt-5.4-pro -> gpt-5.4` when fallback policy is enabled.
+
+### changed
+
+- **template defaults updated to gpt-5.4**: modern + legacy config templates now use `gpt-5.4` variants as the default general-purpose family.
+- **docs refresh for 5.4 rollout**: README, getting-started, configuration, troubleshooting, docs index, and config docs now reflect `gpt-5.4` defaults and optional `gpt-5.4-pro` usage.
+- **test matrix expanded for 5.4**: unit, integration, and property tests now explicitly cover `gpt-5.4` and `gpt-5.4-pro` normalization/reasoning/fallback paths.
+
+### fixed
+
+- **quota probe model order**: quota snapshot probing now includes `gpt-5.4` first before legacy Codex probe models.
+
 ## [5.4.0] - 2026-02-28
 
 ### changed
````

README.md (16 additions, 13 deletions)

````diff
@@ -3,14 +3,14 @@
 [![npm version](https://img.shields.io/npm/v/oc-chatgpt-multi-auth.svg)](https://www.npmjs.com/package/oc-chatgpt-multi-auth)
 [![npm downloads](https://img.shields.io/npm/dw/oc-chatgpt-multi-auth.svg)](https://www.npmjs.com/package/oc-chatgpt-multi-auth)
 
-OAuth plugin for OpenCode that lets you use ChatGPT Plus/Pro rate limits with models like `gpt-5.2`, `gpt-5-codex`, and `gpt-5.1-codex-max` (plus optional entitlement-gated Spark IDs and legacy Codex aliases).
+OAuth plugin for OpenCode that lets you use ChatGPT Plus/Pro rate limits with models like `gpt-5.4`, `gpt-5-codex`, and `gpt-5.1-codex-max` (plus optional/manual `gpt-5.4-pro`, entitlement-gated Spark IDs, and legacy Codex aliases).
 
 > [!NOTE]
 > **Renamed from `opencode-openai-codex-auth-multi`** — If you were using the old package, update your config to use `oc-chatgpt-multi-auth` instead. The rename was necessary because OpenCode blocks plugins containing `opencode-openai-codex-auth` in the name.
 
 ## What You Get
 
-- **GPT-5.2, GPT-5 Codex, GPT-5.1 Codex Max** and all GPT-5.x variants via ChatGPT OAuth
+- **GPT-5.4, GPT-5 Codex, GPT-5.1 Codex Max** and all GPT-5.x variants via ChatGPT OAuth
 - **Multi-account support** — Add up to 20 ChatGPT accounts, health-aware rotation with automatic failover
 - **Per-project accounts** — Each project gets its own account storage (new in v4.10.0)
 - **Workspace-aware identity persistence** — Keeps workspace/org identity stable across token refresh and verify-flagged restore flows
@@ -91,7 +91,7 @@ This writes the config to `~/.config/opencode/opencode.json`, backs up existing
 4. **Use it:**
 
 ```bash
-opencode run "Hello" --model=openai/gpt-5.2 --variant=medium
+opencode run "Hello" --model=openai/gpt-5.4 --variant=medium
 ```
 
 </details>
@@ -119,7 +119,7 @@ This writes the config to `~/.config/opencode/opencode.json`, backs up existing
 ### Verification
 
 ```bash
-opencode run "Hello" --model=openai/gpt-5.2 --variant=medium
+opencode run "Hello" --model=openai/gpt-5.4 --variant=medium
 ```
 
 </details>
@@ -132,7 +132,8 @@ opencode run "Hello" --model=openai/gpt-5.2 --variant=medium
 
 | Model | Variants | Notes |
 |-------|----------|-------|
-| `gpt-5.2` | none, low, medium, high, xhigh | Latest GPT-5.2 with reasoning levels |
+| `gpt-5.4` | none, low, medium, high, xhigh | Latest GPT-5.4 with reasoning levels |
+| `gpt-5.4-pro` | low, medium, high, xhigh | Optional manual model for deeper reasoning; fallback default is `gpt-5.4-pro -> gpt-5.4` |
 | `gpt-5-codex` | low, medium, high | Canonical Codex model for code generation (default: high) |
 | `gpt-5.3-codex-spark` | low, medium, high, xhigh | Spark IDs are supported by the plugin, but access is entitlement-gated by account/workspace |
 | `gpt-5.1-codex-max` | low, medium, high, xhigh | Maximum context Codex |
@@ -145,10 +146,10 @@ Config templates intentionally omit Spark model IDs by default to reduce entitle
 **Using variants:**
 ```bash
 # Modern OpenCode (v1.0.210+)
-opencode run "Hello" --model=openai/gpt-5.2 --variant=high
+opencode run "Hello" --model=openai/gpt-5.4 --variant=high
 
 # Legacy OpenCode (v1.0.209 and below)
-opencode run "Hello" --model=openai/gpt-5.2-high
+opencode run "Hello" --model=openai/gpt-5.4-high
 ```
 
 <details>
@@ -170,8 +171,8 @@ Add this to your `~/.config/opencode/opencode.json`:
       "store": false
     },
     "models": {
-      "gpt-5.2": {
-        "name": "GPT 5.2 (OAuth)",
+      "gpt-5.4": {
+        "name": "GPT 5.4 (OAuth)",
         "limit": { "context": 272000, "output": 128000 },
         "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
         "variants": {
@@ -258,7 +259,7 @@ Optional Spark model block (manual add only when entitled):
   }
 ```
 
-For legacy OpenCode (v1.0.209 and below), use `config/opencode-legacy.json` which has individual model entries like `gpt-5.2-low`, `gpt-5.2-medium`, etc.
+For legacy OpenCode (v1.0.209 and below), use `config/opencode-legacy.json` which has individual model entries like `gpt-5.4-low`, `gpt-5.4-medium`, etc.
 
 </details>
 
@@ -661,15 +662,15 @@ OpenCode uses `~/.config/opencode/` on **all platforms** including Windows.
 1. Use `openai/` prefix:
 ```bash
 # Correct
---model=openai/gpt-5.2
+--model=openai/gpt-5.4
 
 # Wrong
---model=gpt-5.2
+--model=gpt-5.4
 ```
 
 2. Verify model is in your config:
 ```json
-{ "models": { "gpt-5.2": { ... } } }
+{ "models": { "gpt-5.4": { ... } } }
 ```
 
 </details>
@@ -697,6 +698,7 @@ OpenCode uses `~/.config/opencode/` on **all platforms** including Windows.
   "unsupportedCodexPolicy": "fallback",
   "fallbackOnUnsupportedCodexModel": true,
   "unsupportedCodexFallbackChain": {
+    "gpt-5.4-pro": ["gpt-5.4"],
     "gpt-5-codex": ["gpt-5.2-codex"],
     "gpt-5.3-codex": ["gpt-5-codex", "gpt-5.2-codex"],
     "gpt-5.3-codex-spark": ["gpt-5-codex", "gpt-5.3-codex", "gpt-5.2-codex"]
@@ -843,6 +845,7 @@ Create `~/.opencode/openai-codex-auth-config.json` for optional settings:
 | `streamStallTimeoutMs` | `45000` | Abort non-stream parsing if SSE stalls (ms) |
 
 Default unsupported-model fallback chain (used when `unsupportedCodexPolicy` is `fallback`):
+- `gpt-5.4-pro -> gpt-5.4` (if `gpt-5.4-pro` is selected manually)
 - `gpt-5.3-codex -> gpt-5-codex -> gpt-5.2-codex`
 - `gpt-5.3-codex-spark -> gpt-5-codex -> gpt-5.3-codex -> gpt-5.2-codex` (applies if you manually select Spark model IDs)
 - `gpt-5.2-codex -> gpt-5-codex`
````
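The fallback chain this PR extends is a plain `model -> [fallbacks]` map. As a minimal sketch of how such a chain could be walked (hypothetical helper and constant names; the plugin's actual implementation is not shown in this diff), the requested model is tried first, then each configured fallback in order, with any user-supplied override taking precedence over the defaults:

```javascript
// Hypothetical sketch: default chain mirrors the README's documented edges.
const DEFAULT_FALLBACK_CHAIN = {
  "gpt-5.4-pro": ["gpt-5.4"],
  "gpt-5.3-codex": ["gpt-5-codex", "gpt-5.2-codex"],
  "gpt-5.3-codex-spark": ["gpt-5-codex", "gpt-5.3-codex", "gpt-5.2-codex"],
  "gpt-5.2-codex": ["gpt-5-codex"],
};

// Ordered list of models to attempt: the requested model, then its
// fallbacks. A per-model entry in userChain replaces the default entry.
function fallbackOrder(model, userChain = {}) {
  const chain = userChain[model] ?? DEFAULT_FALLBACK_CHAIN[model] ?? [];
  return [model, ...chain];
}

console.log(fallbackOrder("gpt-5.4-pro")); // -> ["gpt-5.4-pro", "gpt-5.4"]
```

This matches the documented behavior that `unsupportedCodexFallbackChain` overrides fallback order per model while unlisted models keep the defaults.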

config/README.md (5 additions, 3 deletions)

````diff
@@ -34,7 +34,7 @@ opencode --version
 OpenCode v1.0.210+ added model `variants`, so one model entry can expose multiple reasoning levels. That keeps modern config much smaller while preserving the same effective presets.
 
 Both templates include:
-- GPT-5.2, GPT-5 Codex, GPT-5.1, GPT-5.1 Codex, GPT-5.1 Codex Max, GPT-5.1 Codex Mini
+- GPT-5.4, GPT-5 Codex, GPT-5.1, GPT-5.1 Codex, GPT-5.1 Codex Max, GPT-5.1 Codex Mini
 - Reasoning variants per model family
 - `store: false` and `include: ["reasoning.encrypted_content"]`
 - Context metadata (272k context / 128k output)
@@ -50,14 +50,14 @@ If your workspace is entitled, you can add Spark model IDs manually.
 Modern template (v1.0.210+):
 
 ```bash
-opencode run "task" --model=openai/gpt-5.2 --variant=medium
+opencode run "task" --model=openai/gpt-5.4 --variant=medium
 opencode run "task" --model=openai/gpt-5-codex --variant=high
 ```
 
 Legacy template (v1.0.209 and below):
 
 ```bash
-opencode run "task" --model=openai/gpt-5.2-medium
+opencode run "task" --model=openai/gpt-5.4-medium
 opencode run "task" --model=openai/gpt-5-codex-high
 ```
 
@@ -71,9 +71,11 @@ Current defaults are strict entitlement handling:
 - `unsupportedCodexPolicy: "strict"` returns entitlement errors directly
 - set `unsupportedCodexPolicy: "fallback"` (or `CODEX_AUTH_UNSUPPORTED_MODEL_POLICY=fallback`) to enable automatic fallback retries
 - `fallbackToGpt52OnUnsupportedGpt53: true` keeps the legacy `gpt-5.3-codex -> gpt-5.2-codex` edge inside fallback mode
+- `gpt-5.4-pro -> gpt-5.4` is included by default in fallback mode (relevant only if you add `gpt-5.4-pro` manually)
 - `unsupportedCodexFallbackChain` lets you override fallback order per model
 
 Default fallback chain (when policy is `fallback`):
+- `gpt-5.4-pro -> gpt-5.4` (if you manually select `gpt-5.4-pro`)
 - `gpt-5.3-codex -> gpt-5-codex -> gpt-5.2-codex`
 - `gpt-5.3-codex-spark -> gpt-5-codex -> gpt-5.3-codex -> gpt-5.2-codex` (only relevant if Spark IDs are added manually)
 - `gpt-5.2-codex -> gpt-5-codex`
````
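config/README.md notes the fallback policy can be enabled either through the config key or the `CODEX_AUTH_UNSUPPORTED_MODEL_POLICY` env var. The diff does not state which wins; the sketch below assumes the env var overrides the config file (a common convention, but an assumption here), with `strict` as the documented default:

```javascript
// Hypothetical precedence sketch: env var beats config key; both fall
// back to the documented default of "strict". Not the plugin's code.
function resolvePolicy(config, env = process.env) {
  const fromEnv = env.CODEX_AUTH_UNSUPPORTED_MODEL_POLICY;
  if (fromEnv === "strict" || fromEnv === "fallback") return fromEnv;
  return config.unsupportedCodexPolicy ?? "strict";
}

console.log(resolvePolicy({}, {})); // -> "strict"
```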

config/opencode-legacy.json (10 additions, 10 deletions)

````diff
@@ -15,8 +15,8 @@
       "store": false
     },
     "models": {
-      "gpt-5.2-none": {
-        "name": "GPT 5.2 None (OAuth)",
+      "gpt-5.4-none": {
+        "name": "GPT 5.4 None (OAuth)",
         "limit": {
           "context": 272000,
           "output": 128000
@@ -40,8 +40,8 @@
           "store": false
         }
       },
-      "gpt-5.2-low": {
-        "name": "GPT 5.2 Low (OAuth)",
+      "gpt-5.4-low": {
+        "name": "GPT 5.4 Low (OAuth)",
         "limit": {
           "context": 272000,
           "output": 128000
@@ -65,8 +65,8 @@
           "store": false
         }
       },
-      "gpt-5.2-medium": {
-        "name": "GPT 5.2 Medium (OAuth)",
+      "gpt-5.4-medium": {
+        "name": "GPT 5.4 Medium (OAuth)",
         "limit": {
           "context": 272000,
           "output": 128000
@@ -90,8 +90,8 @@
           "store": false
         }
       },
-      "gpt-5.2-high": {
-        "name": "GPT 5.2 High (OAuth)",
+      "gpt-5.4-high": {
+        "name": "GPT 5.4 High (OAuth)",
         "limit": {
           "context": 272000,
           "output": 128000
@@ -115,8 +115,8 @@
           "store": false
         }
       },
-      "gpt-5.2-xhigh": {
-        "name": "GPT 5.2 Extra High (OAuth)",
+      "gpt-5.4-xhigh": {
+        "name": "GPT 5.4 Extra High (OAuth)",
         "limit": {
           "context": 272000,
           "output": 128000
````
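The legacy template bakes the reasoning effort into each model ID (`gpt-5.4-high`), while the modern template passes it as a variant. A hypothetical sketch of splitting a legacy preset name back into family plus effort (illustrative only, not the plugin's actual normalizer):

```javascript
// Hypothetical: split a legacy preset ID like "gpt-5.4-high" into its
// base family and effort suffix. Longest suffix is checked first so
// "-xhigh" is not mistaken for "-high".
const EFFORTS = ["xhigh", "high", "medium", "low", "none"];

function splitLegacyPreset(id) {
  for (const effort of EFFORTS) {
    if (id.endsWith(`-${effort}`)) {
      return { model: id.slice(0, -(effort.length + 1)), effort };
    }
  }
  return { model: id, effort: null }; // no effort suffix present
}

console.log(splitLegacyPreset("gpt-5.4-xhigh"));
// -> { model: "gpt-5.4", effort: "xhigh" }
```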

config/opencode-modern.json (2 additions, 2 deletions)

````diff
@@ -15,8 +15,8 @@
       "store": false
     },
     "models": {
-      "gpt-5.2": {
-        "name": "GPT 5.2 (OAuth)",
+      "gpt-5.4": {
+        "name": "GPT 5.4 (OAuth)",
         "limit": {
           "context": 272000,
           "output": 128000
````

docs/configuration.md (20 additions, 16 deletions)

````diff
@@ -34,7 +34,8 @@ controls how much thinking the model does.
 
 | model | supported values |
 |-------|------------------|
-| `gpt-5.2` | none, low, medium, high, xhigh |
+| `gpt-5.4` | none, low, medium, high, xhigh |
+| `gpt-5.4-pro` | low, medium, high, xhigh (optional/manual model) |
 | `gpt-5-codex` | low, medium, high (default: high) |
 | `gpt-5.3-codex` | low, medium, high, xhigh (legacy alias to `gpt-5-codex`) |
 | `gpt-5.3-codex-spark` | low, medium, high, xhigh (entitlement-gated legacy alias; add manually) |
@@ -44,14 +45,14 @@ controls how much thinking the model does.
 | `gpt-5.1-codex-mini` | medium, high |
 | `gpt-5.1` | none, low, medium, high |
 
-the shipped config templates include 21 presets and do not add Spark by default. add `gpt-5.3-codex-spark` manually only for entitled workspaces.
+the shipped config templates include 21 presets and do not add optional IDs by default. add `gpt-5.4-pro` and/or `gpt-5.3-codex-spark` manually only for entitled workspaces.
 
 what they mean:
-- `none` - no reasoning phase (base models only, auto-converts to `low` for codex)
+- `none` - no reasoning phase (base models only; auto-converts to `low` for codex/pro families, including `gpt-5-codex` and `gpt-5.4-pro`)
 - `low` - light reasoning, fastest
 - `medium` - balanced (default)
 - `high` - deep reasoning
-- `xhigh` - max depth for complex tasks (default for legacy `gpt-5.3-codex` / `gpt-5.2-codex` aliases and `gpt-5.1-codex-max`)
+- `xhigh` - max depth for complex tasks (default for legacy `gpt-5.3-codex` / `gpt-5.2-codex` aliases and `gpt-5.1-codex-max`; available for `gpt-5.4` and optional `gpt-5.4-pro`)
 
 ### reasoningSummary
 
@@ -117,6 +118,7 @@ advanced settings go in `~/.opencode/openai-codex-auth-config.json`:
   "fallbackOnUnsupportedCodexModel": false,
   "fallbackToGpt52OnUnsupportedGpt53": true,
   "unsupportedCodexFallbackChain": {
+    "gpt-5.4-pro": ["gpt-5.4"],
     "gpt-5-codex": ["gpt-5.2-codex"]
   }
 }
@@ -147,7 +149,7 @@ The sample above intentionally sets `"retryAllAccountsMaxRetries": 3` as a bound
 | `unsupportedCodexPolicy` | `strict` | unsupported-model behavior: `strict` (return entitlement error) or `fallback` (retry with configured fallback chain) |
 | `fallbackOnUnsupportedCodexModel` | `false` | legacy fallback toggle mapped to `unsupportedCodexPolicy` (prefer using `unsupportedCodexPolicy`) |
 | `fallbackToGpt52OnUnsupportedGpt53` | `true` | legacy compatibility toggle for the `gpt-5.3-codex -> gpt-5.2-codex` edge when generic fallback is enabled |
-| `unsupportedCodexFallbackChain` | `{}` | optional per-model fallback-chain override (map of `model -> [fallback1, fallback2, ...]`) |
+| `unsupportedCodexFallbackChain` | `{}` | optional per-model fallback-chain override (map of `model -> [fallback1, fallback2, ...]`; default includes `gpt-5.4-pro -> gpt-5.4`) |
 | `sessionRecovery` | `true` | auto-recover from common api errors |
 | `autoResume` | `true` | auto-resume after thinking block recovery |
 | `tokenRefreshSkewMs` | `60000` | refresh tokens this many ms before expiry |
@@ -171,6 +173,7 @@ by default the plugin is strict (`unsupportedCodexPolicy: "strict"`). it returns
 set `unsupportedCodexPolicy: "fallback"` to enable model fallback after account/workspace attempts are exhausted.
 
 defaults when fallback policy is enabled and `unsupportedCodexFallbackChain` is empty:
+- `gpt-5.4-pro -> gpt-5.4` (if `gpt-5.4-pro` is selected manually)
 - `gpt-5.3-codex -> gpt-5-codex -> gpt-5.2-codex`
 - `gpt-5.3-codex-spark -> gpt-5-codex -> gpt-5.3-codex -> gpt-5.2-codex` (applies if you manually select Spark model IDs)
 - `gpt-5.2-codex -> gpt-5-codex`
@@ -184,6 +187,7 @@ custom chain example:
   "unsupportedCodexPolicy": "fallback",
   "fallbackOnUnsupportedCodexModel": true,
   "unsupportedCodexFallbackChain": {
+    "gpt-5.4-pro": ["gpt-5.4"],
     "gpt-5-codex": ["gpt-5.2-codex"],
     "gpt-5.3-codex": ["gpt-5-codex", "gpt-5.2-codex"],
     "gpt-5.3-codex-spark": ["gpt-5-codex", "gpt-5.3-codex", "gpt-5.2-codex"]
@@ -264,12 +268,12 @@ different settings for different models:
     "store": false
   },
   "models": {
-    "gpt-5.2-fast": {
-      "name": "fast gpt-5.2",
+    "gpt-5.4-fast": {
+      "name": "fast gpt-5.4",
       "options": { "reasoningEffort": "low" }
     },
-    "gpt-5.2-smart": {
-      "name": "smart gpt-5.2",
+    "gpt-5.4-smart": {
+      "name": "smart gpt-5.4",
       "options": { "reasoningEffort": "high" }
     }
   }
@@ -338,12 +342,12 @@ opencode
 ### verify model resolution
 
 ```bash
-DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/gpt-5.2
+DEBUG_CODEX_PLUGIN=1 opencode run "test" --model=openai/gpt-5.4
 ```
 
 look for:
-```
-[openai-codex-plugin] Model config lookup: "gpt-5.2" → normalized to "gpt-5.2" for API {
+```text
+[openai-codex-plugin] Model config lookup: "gpt-5.4" → normalized to "gpt-5.4" for API {
   hasModelSpecificConfig: true,
   resolvedConfig: { ... }
 }
@@ -353,12 +357,12 @@ look for:
 
 ```bash
 # modern opencode (variants)
-ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.2 --variant=low
-ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.2 --variant=high
+ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.4 --variant=low
+ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.4 --variant=high
 
 # legacy presets (model names include the effort)
-ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.2-low
-ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.2-high
+ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.4-low
+ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "test" --model=openai/gpt-5.4-high
 
 # compare reasoning.effort in logs
 cat ~/.opencode/logs/codex-plugin/request-*-after-transform.json | jq '.reasoning.effort'
````
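docs/configuration.md states that `none` is valid only for base models and auto-converts to `low` for codex/pro families. A hedged sketch of that clamping rule (hypothetical function; the supported-value table is copied from the docs, and the `medium` catch-all is an assumption, not documented behavior):

```javascript
// Hypothetical sketch of the effort-clamping rule described in the docs:
// "none" is only valid where the table lists it; codex/pro families get "low".
const SUPPORTED_EFFORTS = {
  "gpt-5.4": ["none", "low", "medium", "high", "xhigh"],
  "gpt-5.4-pro": ["low", "medium", "high", "xhigh"],
  "gpt-5-codex": ["low", "medium", "high"],
};

function clampEffort(model, effort) {
  const supported = SUPPORTED_EFFORTS[model] ?? ["low", "medium", "high"];
  if (effort === "none" && !supported.includes("none")) return "low";
  // Assumed catch-all: unknown efforts fall back to the balanced default.
  return supported.includes(effort) ? effort : "medium";
}

console.log(clampEffort("gpt-5.4-pro", "none")); // -> "low"
console.log(clampEffort("gpt-5.4", "none"));     // -> "none"
```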
