 |`gpt-5.1-codex-medium`| GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks |
 |`gpt-5.1-codex-high`| GPT 5.1 Codex High (OAuth) | High | Complex code & tools |
-|`gpt-5.1-codex-max`| GPT 5.1 Codex Max (OAuth) | High | Long-horizon builds, large refactors |
+|`gpt-5.1-codex-max-low`| GPT 5.1 Codex Max Low (OAuth) | Low | Fast exploratory large-context work |
+|`gpt-5.1-codex-max-medium`| GPT 5.1 Codex Max Medium (OAuth) | Medium | Balanced large-context builds |
+|`gpt-5.1-codex-max-high`| GPT 5.1 Codex Max High (OAuth) | High | Long-horizon builds, large refactors |
 |`gpt-5.1-codex-max-xhigh`| GPT 5.1 Codex Max Extra High (OAuth) | xHigh | Deep multi-hour agent loops, research/debug marathons |
 |`gpt-5.1-codex-mini-medium`| GPT 5.1 Codex Mini Medium (OAuth) | Medium | Latest Codex mini tier |
 |`gpt-5.1-codex-mini-high`| GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
@@ -359,7 +410,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t
 
 > **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output).
 >
-> **Note**: Codex Max uses the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output support plus `xhigh` reasoning.
+> **Note**: Codex Max presets use the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output support. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick the reasoning level (only `-xhigh` uses `xhigh` reasoning).
 
 > **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results.
- ✅ 272k context + 128k output window for core presets (Codex Max expands output to ~400k)
@@ -209,7 +251,7 @@ Add this to `~/.config/opencode/opencode.json`:
 
 > **Note**: All `gpt-5.1-codex-mini*` presets use 272k context / 128k output limits.
 >
-> **Note**: Codex Max presets map to `gpt-5.1-codex-max` with 272k input and expanded ~400k output plus `xhigh` reasoning.
+> **Note**: Codex Max presets map to the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick the reasoning level (only `-xhigh` uses `xhigh` reasoning).
 
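For orientation, a model entry of the kind these presets describe could look roughly like the JSON fragment below. This is a hedged sketch only — the field names (`name`, `limit`, `options.reasoningEffort`) are assumptions inferred from the notes above, not the plugin's verified schema; copy the real block from `config/full-opencode.json` rather than this sketch.

```json
{
  "models": {
    "gpt-5.1-codex-max-high": {
      "name": "GPT 5.1 Codex Max High (OAuth)",
      "limit": { "context": 272000, "output": 400000 },
      "options": { "reasoningEffort": "high" }
    }
  }
}
```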
Prompt caching is enabled out of the box: when OpenCode sends its session identifier as `prompt_cache_key`, the plugin forwards it untouched so multi-turn runs reuse prior work. The CODEX_MODE bridge prompt bundled with the plugin is kept in sync with the latest Codex CLI release, so the OpenCode UI and Codex share the same tool contract. If you hit your ChatGPT subscription limits, the plugin returns a friendly Codex-style message with the 5-hour and weekly usage windows so you know when capacity resets.
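The pass-through behavior described above can be sketched in a few lines of TypeScript. This is an illustrative sketch, not the plugin's actual code: `withPromptCacheKey` is a hypothetical helper name, and the request shape is simplified to the one field that matters here.

```typescript
// Simplified request shape; real chat-completion bodies carry more fields.
type ChatRequest = {
  model: string;
  messages: unknown[];
  prompt_cache_key?: string;
};

// If OpenCode supplied a session identifier, forward it untouched as
// prompt_cache_key so multi-turn runs can reuse cached prefix work;
// otherwise leave the request body exactly as it was.
function withPromptCacheKey(body: ChatRequest, sessionId?: string): ChatRequest {
  return sessionId ? { ...body, prompt_cache_key: sessionId } : body;
}
```

The key design point from the paragraph above is that the identifier is forwarded verbatim — no hashing or rewriting — so the upstream cache sees a stable key across turns of the same session.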