
Commit bd79669 (1 parent: e75d6a5)

feat: add gpt-5.4.3 model + opencode compatibility updates

- add gpt-5.4/gpt-5.4-pro snapshot alias normalization and remap legacy gpt-5* aliases to gpt-5.4
- split gpt-5.4-pro into an isolated prompt family/cache key while preserving fallback behavior
- update OpenCode templates/docs for 1M context on gpt-5.4* and expand regression coverage across model mapping/family tests

19 files changed: +225 −86
CHANGELOG.md

Lines changed: 16 additions & 0 deletions
```diff
@@ -25,6 +25,22 @@ all notable changes to this project. dates are ISO format (YYYY-MM-DD).
 - **doctor safe-fix edge path**: `codex-doctor fix` now reports a clear non-crashing message when no eligible account is available for auto-switch.
 - **first-time import flow**: `codex-import` no longer fails with `No accounts to export` when storage is empty; pre-import backup is skipped cleanly in zero-account setups.
 
+## [5.4.3] - 2026-03-06
+
+### added
+
+- **gpt-5.4 snapshot alias normalization**: added support for `gpt-5.4-2026-03-05*` and `gpt-5.4-pro-2026-03-05*` model IDs (including effort suffix variants).
+
+### changed
+
+- **legacy GPT-5 alias target updated**: `gpt-5`, `gpt-5-mini`, and `gpt-5-nano` now normalize to `gpt-5.4` as the default general family.
+- **gpt-5.4-pro family isolation**: prompt-family detection now keeps `gpt-5.4-pro` separate from `gpt-5.4` for independent prompt/cache handling while preserving fallback policy behavior (`gpt-5.4-pro -> gpt-5.4`).
+- **OpenCode 5.4 template limits updated**: shipped OpenCode config templates now set `gpt-5.4*` context to `1,000,000` (output remains `128,000`) and docs now include optional `model_context_window` / `model_auto_compact_token_limit` tuning guidance.
+
+### fixed
+
+- **5.4.3 regression/test coverage alignment**: expanded and corrected normalization, family-routing, and prompt-mapping tests for snapshot aliases, pro-family separation, and legacy alias behavior.
+
 ## [5.4.2] - 2026-03-05
 
 ### added
```

README.md

Lines changed: 11 additions & 3 deletions
````diff
@@ -132,8 +132,8 @@ opencode run "Hello" --model=openai/gpt-5.4 --variant=medium
 
 | Model | Variants | Notes |
 |-------|----------|-------|
-| `gpt-5.4` | none, low, medium, high, xhigh | Latest GPT-5.4 with reasoning levels |
-| `gpt-5.4-pro` | low, medium, high, xhigh | Optional manual model for deeper reasoning; fallback default is `gpt-5.4-pro -> gpt-5.4` |
+| `gpt-5.4` | none, low, medium, high, xhigh | Latest GPT-5.4 with reasoning levels and 1,000,000 context window |
+| `gpt-5.4-pro` | low, medium, high, xhigh | Optional manual model for deeper reasoning; fallback default is `gpt-5.4-pro -> gpt-5.4` (also 1,000,000 context window) |
 | `gpt-5-codex` | low, medium, high | Canonical Codex model for code generation (default: high) |
 | `gpt-5.3-codex-spark` | low, medium, high, xhigh | Spark IDs are supported by the plugin, but access is entitlement-gated by account/workspace |
 | `gpt-5.1-codex-max` | low, medium, high, xhigh | Maximum context Codex |
@@ -143,6 +143,14 @@ opencode run "Hello" --model=openai/gpt-5.4 --variant=medium
 
 Config templates intentionally omit Spark model IDs by default to reduce entitlement failures on unsupported accounts. Add Spark manually only if your workspace is entitled.
 
+Legacy and snapshot aliases supported by the plugin:
+- `gpt-5`, `gpt-5-mini`, `gpt-5-nano` normalize to `gpt-5.4`
+- `gpt-5.4-2026-03-05` and `gpt-5.4-pro-2026-03-05` (including effort suffix variants) normalize to their stable families
+
+If your OpenCode runtime supports global auto-compaction settings, set:
+- `model_context_window = 1000000`
+- `model_auto_compact_token_limit = 900000`
+
 **Using variants:**
 ```bash
 # Modern OpenCode (v1.0.210+)
@@ -173,7 +181,7 @@ Add this to your `~/.config/opencode/opencode.json`:
   "models": {
     "gpt-5.4": {
       "name": "GPT 5.4 (OAuth)",
-      "limit": { "context": 272000, "output": 128000 },
+      "limit": { "context": 1000000, "output": 128000 },
       "modalities": { "input": ["text", "image", "pdf"], "output": ["text"] },
       "variants": {
         "none": { "reasoningEffort": "none" },
````
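The alias behavior described in this README hunk can be sketched as a small exact-match table plus a snapshot-ID pattern. `resolveAlias`, `LEGACY_ALIASES`, and `SNAPSHOT_RE` are illustrative names, not the plugin's API; the real mapping lives in `lib/request/helpers/model-map.ts`.

```typescript
// Sketch of the documented alias normalization (hypothetical helper names).
const LEGACY_ALIASES: Record<string, string> = {
  "gpt-5": "gpt-5.4",
  "gpt-5-mini": "gpt-5.4",
  "gpt-5-nano": "gpt-5.4",
};

// Snapshot IDs such as gpt-5.4-2026-03-05-high collapse to their stable family.
const SNAPSHOT_RE = /^(gpt-5\.4(?:-pro)?)-\d{4}-\d{2}-\d{2}(?:-[a-z]+)?$/;

function resolveAlias(model: string): string {
  const legacy = LEGACY_ALIASES[model];
  if (legacy !== undefined) return legacy;
  const snapshot = SNAPSHOT_RE.exec(model);
  if (snapshot !== null) return snapshot[1];
  return model; // unknown IDs pass through untouched
}

console.log(resolveAlias("gpt-5-mini")); // → "gpt-5.4"
console.log(resolveAlias("gpt-5.4-pro-2026-03-05-high")); // → "gpt-5.4-pro"
```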

config/README.md

Lines changed: 5 additions & 1 deletion
```diff
@@ -37,7 +37,11 @@ Both templates include:
 - GPT-5.4, GPT-5 Codex, GPT-5.1, GPT-5.1 Codex, GPT-5.1 Codex Max, GPT-5.1 Codex Mini
 - Reasoning variants per model family
 - `store: false` and `include: ["reasoning.encrypted_content"]`
-- Context metadata (272k context / 128k output)
+- Context metadata (`gpt-5.4*`: 1,000,000 context / 128,000 output; other shipped models: 272,000 / 128,000)
+
+If your OpenCode runtime supports global compaction tuning, you can also set:
+- `model_context_window = 1000000`
+- `model_auto_compact_token_limit = 900000`
 
 ## Spark model note
```

config/opencode-legacy.json

Lines changed: 5 additions & 5 deletions
```diff
@@ -18,7 +18,7 @@
     "gpt-5.4-none": {
       "name": "GPT 5.4 None (OAuth)",
       "limit": {
-        "context": 272000,
+        "context": 1000000,
         "output": 128000
       },
       "modalities": {
@@ -43,7 +43,7 @@
     "gpt-5.4-low": {
       "name": "GPT 5.4 Low (OAuth)",
       "limit": {
-        "context": 272000,
+        "context": 1000000,
         "output": 128000
       },
       "modalities": {
@@ -68,7 +68,7 @@
     "gpt-5.4-medium": {
       "name": "GPT 5.4 Medium (OAuth)",
       "limit": {
-        "context": 272000,
+        "context": 1000000,
         "output": 128000
       },
       "modalities": {
@@ -93,7 +93,7 @@
     "gpt-5.4-high": {
       "name": "GPT 5.4 High (OAuth)",
       "limit": {
-        "context": 272000,
+        "context": 1000000,
         "output": 128000
       },
       "modalities": {
@@ -118,7 +118,7 @@
     "gpt-5.4-xhigh": {
       "name": "GPT 5.4 Extra High (OAuth)",
       "limit": {
-        "context": 272000,
+        "context": 1000000,
         "output": 128000
       },
       "modalities": {
```

config/opencode-modern.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -18,7 +18,7 @@
     "gpt-5.4": {
       "name": "GPT 5.4 (OAuth)",
       "limit": {
-        "context": 272000,
+        "context": 1000000,
         "output": 128000
       },
       "modalities": {
```

docs/configuration.md

Lines changed: 11 additions & 0 deletions
```diff
@@ -46,6 +46,17 @@ controls how much thinking the model does.
 | `gpt-5.1` | none, low, medium, high |
 
 the shipped config templates include 21 presets and do not add optional IDs by default. add `gpt-5.4-pro` and/or `gpt-5.3-codex-spark` manually only for entitled workspaces.
+for context sizing, shipped templates use:
+- `gpt-5.4` and `gpt-5.4-pro`: `context=1000000`, `output=128000`
+- other shipped families: `context=272000`, `output=128000`
+
+model normalization aliases:
+- legacy `gpt-5`, `gpt-5-mini`, `gpt-5-nano` map to `gpt-5.4`
+- snapshot ids `gpt-5.4-2026-03-05*` and `gpt-5.4-pro-2026-03-05*` map to stable `gpt-5.4` / `gpt-5.4-pro`
+
+if your OpenCode runtime supports global compaction tuning, you can set:
+- `model_context_window = 1000000`
+- `model_auto_compact_token_limit = 900000`
 
 what they mean:
 - `none` - no reasoning phase (base models only; auto-converts to `low` for codex/pro families, including `gpt-5-codex` and `gpt-5.4-pro`)
```
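As a concrete example of the sizing above, a trimmed `opencode.json` might combine a per-model limit with the optional global compaction keys. This is a sketch: the exact placement of `model_context_window` / `model_auto_compact_token_limit` depends on your OpenCode runtime version, so verify against your runtime's config schema.

```json
{
  "model_context_window": 1000000,
  "model_auto_compact_token_limit": 900000,
  "models": {
    "gpt-5.4": {
      "name": "GPT 5.4 (OAuth)",
      "limit": { "context": 1000000, "output": 128000 }
    },
    "gpt-5.1": {
      "name": "GPT 5.1 (OAuth)",
      "limit": { "context": 272000, "output": 128000 }
    }
  }
}
```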

docs/development/TESTING.md

Lines changed: 14 additions & 14 deletions
````diff
@@ -54,10 +54,10 @@ npm test
 
 | User Selects | Plugin Receives | Normalizes To | Config Lookup | API Receives | Result |
 |--------------|-----------------|---------------|---------------|--------------|--------|
-| `openai/gpt-5` | `"gpt-5"` | `"gpt-5.1"` | `models["gpt-5"]` → undefined | `"gpt-5.1"` | ✅ Uses global options |
+| `openai/gpt-5` | `"gpt-5"` | `"gpt-5.4"` | `models["gpt-5"]` → undefined | `"gpt-5.4"` | ✅ Uses global options |
 | `openai/gpt-5-codex` | `"gpt-5-codex"` | `"gpt-5-codex"` | `models["gpt-5-codex"]` → undefined | `"gpt-5-codex"` | ✅ Uses global options |
-| `openai/gpt-5-mini` | `"gpt-5-mini"` | `"gpt-5.1"` | `models["gpt-5-mini"]` → undefined | `"gpt-5.1"` | ✅ Uses global options |
-| `openai/gpt-5-nano` | `"gpt-5-nano"` | `"gpt-5.1"` | `models["gpt-5-nano"]` → undefined | `"gpt-5.1"` | ✅ Uses global options |
+| `openai/gpt-5-mini` | `"gpt-5-mini"` | `"gpt-5.4"` | `models["gpt-5-mini"]` → undefined | `"gpt-5.4"` | ✅ Uses global options |
+| `openai/gpt-5-nano` | `"gpt-5-nano"` | `"gpt-5.4"` | `models["gpt-5-nano"]` → undefined | `"gpt-5.4"` | ✅ Uses global options |
 
 **Expected Behavior:**
 - ✅ All models work with global options
@@ -274,9 +274,9 @@ API receives: "gpt-5.1" ✅
 ```
 User selects: openai/my-gpt-5-variant
 Plugin receives: "my-gpt-5-variant"
-normalizeModel: "my-gpt-5-variant" → "gpt-5.1" ✅ (includes "gpt-5", not "codex")
+normalizeModel: "my-gpt-5-variant" → "gpt-5.4" ✅ (includes "gpt-5", not "codex")
 Config lookup: models["my-gpt-5-variant"] → Found ✅
-API receives: "gpt-5.1" ✅
+API receives: "gpt-5.4" ✅
 ```
 
 **Result:** ✅ Works (correct model selected)
@@ -617,9 +617,9 @@ normalizeModel("codex-mini-latest") // → "gpt-5.1-codex-mini" ✅
 normalizeModel("gpt-5.1-codex") // → "gpt-5-codex" ✅
 normalizeModel("gpt-5.1") // → "gpt-5.1" ✅
 normalizeModel("my-codex-model") // → "gpt-5-codex" ✅
-normalizeModel("gpt-5") // → "gpt-5.1" ✅
-normalizeModel("gpt-5-mini") // → "gpt-5.1" ✅
-normalizeModel("gpt-5-nano") // → "gpt-5.1" ✅
+normalizeModel("gpt-5") // → "gpt-5.4" ✅
+normalizeModel("gpt-5-mini") // → "gpt-5.4" ✅
+normalizeModel("gpt-5-nano") // → "gpt-5.4" ✅
 normalizeModel("GPT 5.4 Pro High") // → "gpt-5.4-pro" ✅
 normalizeModel(undefined) // → "gpt-5.1" ✅
 normalizeModel("random-model") // → "gpt-5.1" ✅ (fallback)
@@ -658,7 +658,7 @@ export function normalizeModel(model: string | undefined): string {
     return "gpt-5-codex";
   }
   if (normalized.includes("gpt-5") || normalized.includes("gpt 5")) {
-    return "gpt-5.1";
+    return "gpt-5.4";
   }
   return "gpt-5.1";
 }
@@ -667,7 +667,7 @@ export function normalizeModel(model: string | undefined): string {
 **Why this works:**
 - ✅ Case-insensitive (`.toLowerCase()` + `.includes()`)
 - ✅ Pattern-based (works with any naming)
-- ✅ Safe fallback (unknown models → `gpt-5.1`)
+- ✅ Legacy GPT-5 fallback (`gpt-5*` aliases → `gpt-5.4`) + safe unknown fallback (`gpt-5.1`)
 - ✅ Codex priority with explicit Codex Mini support (`codex-mini*` → `codex-mini-latest`)
 
 ---
@@ -723,17 +723,17 @@ opencode run "test" --model=openai/gpt-5-codex
 ```typescript
 describe('normalizeModel', () => {
   test('handles all default models', () => {
-    expect(normalizeModel('gpt-5')).toBe('gpt-5.1')
+    expect(normalizeModel('gpt-5')).toBe('gpt-5.4')
     expect(normalizeModel('gpt-5-codex')).toBe('gpt-5-codex')
     expect(normalizeModel('gpt-5-codex-mini')).toBe('gpt-5.1-codex-mini')
-    expect(normalizeModel('gpt-5-mini')).toBe('gpt-5.1')
-    expect(normalizeModel('gpt-5-nano')).toBe('gpt-5.1')
+    expect(normalizeModel('gpt-5-mini')).toBe('gpt-5.4')
+    expect(normalizeModel('gpt-5-nano')).toBe('gpt-5.4')
   })
 
   test('handles custom preset names', () => {
     expect(normalizeModel('gpt-5-codex-low')).toBe('gpt-5-codex')
     expect(normalizeModel('openai/gpt-5-codex-mini-high')).toBe('gpt-5.1-codex-mini')
-    expect(normalizeModel('gpt-5-high')).toBe('gpt-5.1')
+    expect(normalizeModel('gpt-5-high')).toBe('gpt-5.4')
   })
 
   test('handles legacy names', () => {
````
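The expectations in the hunks above can be exercised against a condensed, standalone sketch of the fallback chain. The real `normalizeModel` has earlier version-specific branches (for example, keeping `gpt-5.1` on `gpt-5.1`) and codex-mini handling omitted here, so only the legacy-alias, codex, and unknown cases are asserted.

```typescript
// Condensed sketch of the updated fallback logic from this diff. The real
// function routes gpt-5.1/gpt-5.4/codex-mini IDs before reaching these checks.
function sketchNormalize(model: string | undefined): string {
  if (model === undefined) return "gpt-5.1"; // documented default
  const normalized = model.toLowerCase();
  if (normalized.includes("codex")) return "gpt-5-codex"; // simplified codex branch
  if (normalized.includes("gpt-5") || normalized.includes("gpt 5")) {
    return "gpt-5.4"; // legacy gpt-5* aliases now target gpt-5.4
  }
  return "gpt-5.1"; // safe fallback for unknown models
}

console.log(sketchNormalize("gpt-5-nano")); // → "gpt-5.4"
console.log(sketchNormalize("random-model")); // → "gpt-5.1"
```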

docs/getting-started.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -206,16 +206,16 @@ Safe mode effects:
 
 | Model | Variants | Notes |
 |-------|----------|-------|
-| `gpt-5.4` | none, low, medium, high, xhigh | Latest GPT-5.4 |
-| `gpt-5.4-pro` | low, medium, high, xhigh | Optional manual model for deeper reasoning; fallback default is `gpt-5.4-pro -> gpt-5.4` |
+| `gpt-5.4` | none, low, medium, high, xhigh | Latest GPT-5.4 (1,000,000 context) |
+| `gpt-5.4-pro` | low, medium, high, xhigh | Optional manual model for deeper reasoning; fallback default is `gpt-5.4-pro -> gpt-5.4` (1,000,000 context) |
 | `gpt-5-codex` | low, medium, high | Canonical Codex for code generation |
 | `gpt-5.3-codex-spark` | low, medium, high, xhigh | Optional manual model; entitlement-gated by account/workspace |
 | `gpt-5.1-codex-max` | low, medium, high, xhigh | Maximum context |
 | `gpt-5.1-codex` | low, medium, high | Standard Codex |
 | `gpt-5.1-codex-mini` | medium, high | Lightweight |
 | `gpt-5.1` | none, low, medium, high | Base model |
 
-**Total: 21 template presets** with 272k context / 128k output (+ optional Spark IDs when entitled).
+**Total: 21 template presets** with mixed context sizing: `gpt-5.4*` at 1,000,000 / 128,000 and other shipped families at 272,000 / 128,000 (+ optional Spark IDs when entitled).
 
 ---
```

lib/prompts/codex.ts

Lines changed: 9 additions & 1 deletion
```diff
@@ -49,6 +49,7 @@ export type ModelFamily =
   | "codex-max"
   | "codex"
   | "gpt-5.4"
+  | "gpt-5.4-pro"
   | "gpt-5.2"
   | "gpt-5.1";
 
@@ -61,6 +62,7 @@ export const MODEL_FAMILIES: readonly ModelFamily[] = [
   "codex-max",
   "codex",
   "gpt-5.4",
+  "gpt-5.4-pro",
   "gpt-5.2",
   "gpt-5.1",
 ] as const;
@@ -75,6 +77,8 @@ const PROMPT_FILES: Record<ModelFamily, string> = {
   codex: "gpt_5_codex_prompt.md",
   // As of Codex rust-v0.111.0, GPT-5.4 uses the same prompt file family as GPT-5.2.
   "gpt-5.4": "gpt_5_2_prompt.md",
+  // GPT-5.4-pro uses the same core prompt file as GPT-5.4, but keeps isolated cache/family state.
+  "gpt-5.4-pro": "gpt_5_2_prompt.md",
   "gpt-5.2": "gpt_5_2_prompt.md",
   "gpt-5.1": "gpt_5_1_prompt.md",
 };
@@ -87,6 +91,7 @@ const CACHE_FILES: Record<ModelFamily, string> = {
   "codex-max": "codex-max-instructions.md",
   codex: "codex-instructions.md",
   "gpt-5.4": "gpt-5.4-instructions.md",
+  "gpt-5.4-pro": "gpt-5.4-pro-instructions.md",
   "gpt-5.2": "gpt-5.2-instructions.md",
   "gpt-5.1": "gpt-5.1-instructions.md",
 };
@@ -120,6 +125,9 @@ export function getModelFamily(normalizedModel: string): ModelFamily {
   ) {
     return "codex";
   }
+  if (/\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/i.test(normalizedModel)) {
+    return "gpt-5.4-pro";
+  }
   if (/\bgpt(?:-| )5\.4(?:\b|[- ])/i.test(normalizedModel)) {
     return "gpt-5.4";
   }
@@ -404,7 +412,7 @@ function refreshInstructionsInBackground(
  * Prewarm instruction caches for the provided models/families.
  */
 export function prewarmCodexInstructions(models: string[] = []): void {
-  const candidates = models.length > 0 ? models : ["gpt-5-codex", "gpt-5.4", "gpt-5.2", "gpt-5.1"];
+  const candidates = models.length > 0 ? models : ["gpt-5-codex", "gpt-5.4", "gpt-5.4-pro", "gpt-5.2", "gpt-5.1"];
   for (const model of candidates) {
     void getCodexInstructions(model).catch((error) => {
       logDebug("Codex instruction prewarm failed", {
```
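The ordering in the `getModelFamily` hunk matters: the `gpt-5.4-pro` test must run before the broader `gpt-5.4` test, or pro IDs would match the base pattern first and be routed to the `gpt-5.4` family. A minimal standalone sketch of just the two checks from this diff (the real function handles many more families):

```typescript
// Standalone sketch of the two family checks added in this commit; regexes are
// copied from the diff, the surrounding function is simplified for illustration.
function sketchGetFamily(normalizedModel: string): string {
  if (/\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/i.test(normalizedModel)) {
    return "gpt-5.4-pro"; // checked first so pro IDs are not claimed by gpt-5.4
  }
  if (/\bgpt(?:-| )5\.4(?:\b|[- ])/i.test(normalizedModel)) {
    return "gpt-5.4";
  }
  return "gpt-5.1"; // stand-in fallback; the real function has more branches
}

console.log(sketchGetFamily("gpt-5.4-pro")); // → "gpt-5.4-pro"
console.log(sketchGetFamily("gpt-5.4-high")); // → "gpt-5.4"
```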

lib/request/helpers/model-map.ts

Lines changed: 16 additions & 4 deletions
```diff
@@ -67,6 +67,12 @@ export const MODEL_MAP: Record<string, string> = {
   "gpt-5.4-medium": "gpt-5.4",
   "gpt-5.4-high": "gpt-5.4",
   "gpt-5.4-xhigh": "gpt-5.4",
+  "gpt-5.4-2026-03-05": "gpt-5.4",
+  "gpt-5.4-2026-03-05-none": "gpt-5.4",
+  "gpt-5.4-2026-03-05-low": "gpt-5.4",
+  "gpt-5.4-2026-03-05-medium": "gpt-5.4",
+  "gpt-5.4-2026-03-05-high": "gpt-5.4",
+  "gpt-5.4-2026-03-05-xhigh": "gpt-5.4",
 
   // ============================================================================
   // GPT-5.4 Pro Models (optional/manual config)
@@ -77,6 +83,12 @@ export const MODEL_MAP: Record<string, string> = {
   "gpt-5.4-pro-medium": "gpt-5.4-pro",
   "gpt-5.4-pro-high": "gpt-5.4-pro",
   "gpt-5.4-pro-xhigh": "gpt-5.4-pro",
+  "gpt-5.4-pro-2026-03-05": "gpt-5.4-pro",
+  "gpt-5.4-pro-2026-03-05-none": "gpt-5.4-pro",
+  "gpt-5.4-pro-2026-03-05-low": "gpt-5.4-pro",
+  "gpt-5.4-pro-2026-03-05-medium": "gpt-5.4-pro",
+  "gpt-5.4-pro-2026-03-05-high": "gpt-5.4-pro",
+  "gpt-5.4-pro-2026-03-05-xhigh": "gpt-5.4-pro",
 
   // ============================================================================
   // GPT-5.2 Models (supports none/low/medium/high/xhigh per OpenAI API docs)
@@ -128,11 +140,11 @@ export const MODEL_MAP: Record<string, string> = {
   "gpt-5-codex-mini-high": "gpt-5.1-codex-mini",
 
   // ============================================================================
-  // GPT-5 General Purpose Models (LEGACY - maps to gpt-5.1 as gpt-5 is being phased out)
+  // GPT-5 General Purpose Models (LEGACY - maps to gpt-5.4 latest)
   // ============================================================================
-  "gpt-5": "gpt-5.1",
-  "gpt-5-mini": "gpt-5.1",
-  "gpt-5-nano": "gpt-5.1",
+  "gpt-5": "gpt-5.4",
+  "gpt-5-mini": "gpt-5.4",
+  "gpt-5-nano": "gpt-5.4",
 };
 
 /**
```
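`MODEL_MAP` is a flat exact-match table, which is why this commit enumerates every snapshot/effort combination as its own key rather than matching a pattern. A lookup sketch over an illustrative subset (the subset and `mapModel` name are for demonstration only; unknown IDs fall through to the plugin's later pattern-based normalization):

```typescript
// Illustrative subset of MODEL_MAP entries from this commit.
const MODEL_MAP_SUBSET: Record<string, string> = {
  "gpt-5.4-2026-03-05-high": "gpt-5.4",
  "gpt-5.4-pro-2026-03-05-xhigh": "gpt-5.4-pro",
  "gpt-5": "gpt-5.4",
  "gpt-5-mini": "gpt-5.4",
};

// Exact-match first; IDs not in the table pass through for later handling.
function mapModel(id: string): string {
  return MODEL_MAP_SUBSET[id] ?? id;
}

console.log(mapModel("gpt-5")); // → "gpt-5.4"
console.log(mapModel("gpt-5.4-pro-2026-03-05-xhigh")); // → "gpt-5.4-pro"
```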
