
# Documentation

Welcome to the OpenCode OpenAI Codex Auth Plugin documentation.

## For Users

## For Developers

Explore the engineering depth behind this plugin:

### Key Architectural Decisions

This plugin bridges OpenCode and the ChatGPT Codex backend with explicit mode controls:

1. **Request Transform Mode Split** - `native` mode (the default) preserves the OpenCode payload shape; `legacy` mode applies Codex compatibility rewrites.
2. **Stateless Operation** - the ChatGPT backend requires `store: false`, verified via testing.
3. **Full Context Preservation** - sends the complete message history and always includes `reasoning.encrypted_content`.
4. **Stale-While-Revalidate Caching** - keeps prompt/instruction fetches fast while avoiding GitHub rate limits; an optional startup prewarm reduces first-turn latency.
5. **Per-Model Configuration** - enables quality presets with quick switching.
6. **Fast Session Mode** - optional low-latency tuning (clamps reasoning effort and verbosity on trivial turns) without changing defaults.
7. **Entitlement-Aware Fallback Flow** - unsupported models first try the remaining accounts/workspaces, then an optional fallback chain if enabled.
8. **Beginner Operations Layer** - setup checklist/wizard, guided doctor flow, next-step recommender, and startup preflight summaries.
9. **Safety-First Account Backup Flow** - timestamped exports, dry-run import previews, and pre-import snapshots taken before apply when existing accounts are present.
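
To illustrate the stale-while-revalidate pattern from decision 4, here is a minimal TypeScript sketch. All names here (`SwrCache`, `fetcher`, `maxAgeMs`, `prewarm`) are hypothetical and chosen for illustration; the plugin's actual cache implementation may differ.

```typescript
type Entry<T> = { value: T; fetchedAt: number };

class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();
  private inflight = new Map<string, Promise<T>>();

  constructor(
    private fetcher: (key: string) => Promise<T>,
    private maxAgeMs: number,
  ) {}

  // Serve a cached value immediately when present; if the entry is older
  // than maxAgeMs, kick off a background refresh without blocking.
  async get(key: string): Promise<T> {
    const entry = this.entries.get(key);
    if (entry) {
      if (Date.now() - entry.fetchedAt > this.maxAgeMs) {
        void this.revalidate(key); // stale: return old value, refresh async
      }
      return entry.value;
    }
    return this.revalidate(key); // cold: must await the first fetch
  }

  private revalidate(key: string): Promise<T> {
    const pending = this.inflight.get(key);
    if (pending) return pending; // dedupe concurrent refreshes
    const p = this.fetcher(key)
      .then((value) => {
        this.entries.set(key, { value, fetchedAt: Date.now() });
        return value;
      })
      .finally(() => this.inflight.delete(key));
    this.inflight.set(key, p);
    return p;
  }

  // Optional startup prewarm to cut first-turn latency.
  prewarm(keys: string[]): Promise<T[]> {
    return Promise.all(keys.map((k) => this.revalidate(k)));
  }
}
```

Deduplicating in-flight fetches matters here: without it, several simultaneous stale hits would each trigger their own GitHub request, which is exactly the rate-limit pressure the cache exists to avoid.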

**Testing:** 1,700+ tests plus integration coverage.


Quick Links: GitHub | npm | Issues