An idiomatic Elixir SDK for embedding OpenAI's Codex agent in your workflows and applications. This SDK wraps the `codex-rs` executable, providing a complete, production-ready interface with streaming support and comprehensive event handling.
## Documentation Menu

- `README.md` - installation, quick start, and runtime boundaries
- `guides/01-getting-started.md` - first threads, turns, and sessions
- `guides/02-architecture.md` - transport layering and ownership boundaries
- `guides/03-api-guide.md` - public modules and common call patterns
- `guides/07-models-and-reasoning.md` - shared catalog projections and reasoning controls
- `guides/08-configuration-defaults.md` - config precedence and default resolution
## Features

- **End-to-End Codex Lifecycle**: Spawn, resume, and manage full Codex threads with rich turn instrumentation.

Custom trust roots use `CODEX_CA_CERTIFICATE` first and `SSL_CERT_FILE` second. The same PEM bundle is applied consistently to Codex CLI subprocesses, direct HTTP clients, remote model fetches, MCP HTTP/OAuth, realtime websockets, and voice HTTP requests.
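
The `CODEX_CA_CERTIFICATE`-then-`SSL_CERT_FILE` precedence can be sketched as a small resolver. This is illustrative only: the env var names come from the text above, but `TrustRoots` and its `resolve/1` function are invented here, not the SDK's actual implementation.

```elixir
defmodule TrustRoots do
  # Sketch of the trust-root precedence described above:
  # `CODEX_CA_CERTIFICATE` wins, `SSL_CERT_FILE` is the fallback.
  # Returns nil when neither variable is set.
  def resolve(env \\ System.get_env()) do
    Map.get(env, "CODEX_CA_CERTIFICATE") || Map.get(env, "SSL_CERT_FILE")
  end
end
```
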
## Centralized Model Selection

`codex_sdk` no longer owns the active model catalog, fallback rules, or default selection policy. That authority now lives in `cli_subprocess_core`.

The authoritative path is:

- `CliSubprocessCore.ModelRegistry.resolve/3`
- `CliSubprocessCore.ModelRegistry.validate/2`
- `CliSubprocessCore.ModelRegistry.default_model/2`
- `CliSubprocessCore.ModelRegistry.build_arg_payload/3`

`Codex.Options.new/1` resolves a shared `model_payload`, then projects the current `model` and `reasoning_effort` from that payload. `Codex.Models` is now a read-only projection of the shared core catalog; it no longer owns a separate catalog or a separate fallback/defaulting path.

Operationally, that means:

- an explicit request wins first
- an environment override comes next
- the provider default and remote default are core-owned, not SDK-owned
- a missing provider, a missing model, placeholder model input, and an invalid reasoning effort all fail through the core error contract
- CLI argument rendering only emits `--model` from a non-empty resolved value

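A minimal sketch of that precedence and the `--model` rendering rule, under stated assumptions: the module and function names below are hypothetical, the env key `CODEX_MODEL` is taken from the environment overrides named earlier in this README, and the authoritative logic actually lives in `CliSubprocessCore.ModelRegistry`.

```elixir
defmodule ModelSelectionSketch do
  # Precedence sketch: explicit request first, then the environment
  # override, then the core-owned provider default.
  def resolve(explicit, env, provider_default) do
    explicit || Map.get(env, "CODEX_MODEL") || provider_default
  end

  # `--model` is only emitted from a non-empty resolved value.
  def render_model_args(model) when model in [nil, ""], do: []
  def render_model_args(model), do: ["--model", model]
end
```

The error-contract bullets above are deliberately omitted here; in the real registry those cases return errors rather than falling through to a default.
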
Use `Codex.Models.default_model/0`, `Codex.Models.list_visible/1`, and `Codex.Models.default_reasoning_effort/1` as convenience readers over that shared contract.
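
The "read-only projection" pattern those readers follow can be sketched like this. The payload keys (`:model`, `:reasoning_effort`, `:catalog`, `:visible`) are invented for illustration; the real payload shape is owned by `cli_subprocess_core`, and `ProjectionSketch` is not the real `Codex.Models`.

```elixir
defmodule ProjectionSketch do
  # A projection module owns no catalog of its own: it only reads
  # fields out of a core-owned payload passed in by the caller.
  def default_model(%{model: model}), do: model
  def default_reasoning_effort(%{reasoning_effort: effort}), do: effort
  def list_visible(%{catalog: catalog}), do: Enum.filter(catalog, & &1.visible)
end
```
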

See the [OpenAI Codex documentation](https://github.com/openai/codex) for more authentication options.