
Commit 0c55e95

Merge pull request #13 from PredictabilityAtScale/codex/update-schema-documentation-in-readme-and-docs
docs: expose response schema, cache and provider_options; add vendor schema gap analysis
2 parents 82d4861 + 3fbb9cf commit 0c55e95

6 files changed: 145 additions & 10 deletions


README.md

Lines changed: 3 additions & 1 deletion
@@ -561,8 +561,10 @@ Prompt files use YAML front matter with these fields:
| `fallback_models` | `string[]` | Fallback model list |
| `reasoning` | `object` | `{ effort, budget_tokens }` |
| `sampling` | `object` | `{ temperature, top_p, frequency_penalty, presence_penalty, stop, max_output_tokens }` |
-| `response` | `object` | `{ format, stream }` |
+| `response` | `object` | `{ format, stream, schema, schema_name, schema_strict }` |
+| `cache` | `object` | Provider-specific cache controls (`openai`, `anthropic`, `gemini`/`google`) |
| `tools` | `array` | Tool references (string names or inline definitions) |
+| `provider_options` | `object` | Provider-specific non-portable options (`anthropic`, `gemini`) |
| `mcp` | `object` | MCP server references |
| `context` | `object` | `{ inputs, history }` — declare expected variables, with optional per-input `max_size`, `trim`, structured or literal `allow_regex`/`deny_regex`, and built-in `non_empty` / `reject_secrets` validators |
| `includes` | `string[]` | Paths to included prompt files |
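
For orientation, the three fields added or widened above compose in prompt front matter roughly as follows. This is a sketch, not content from the commit: the prompt id, model, and every value are illustrative placeholders.

```yaml
---
id: support/reply          # illustrative id
schema_version: 1
provider: any              # documented enum value; per-provider blocks below apply as relevant
model: gpt-5.4
response:
  format: json
  schema:                  # portable JSON schema for structured output
    type: object
    properties:
      answer:
        type: string
  schema_name: support_reply
  schema_strict: true
cache:
  openai:
    prompt_cache_key: support-reply-v1   # illustrative cache key
provider_options:
  anthropic:
    top_k: 40              # non-portable Anthropic sampling control
---
```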

SKILL.md

Lines changed: 7 additions & 4 deletions
@@ -8,7 +8,7 @@ description: Guidance for creating and editing promptopskit prompt files, defaul
This project uses **promptopskit** to manage LLM prompts as code.
Prompts live in markdown files with YAML front matter, are validated against
a schema, and render into provider-specific request bodies (OpenAI, Anthropic,
-Gemini, OpenRouter). Follow these instructions when creating or editing prompts.
+Gemini, OpenRouter, and OpenAI Responses). Follow these instructions when creating or editing prompts.

---

@@ -58,13 +58,15 @@ the fields required by that specific file:
| `id` | string | **yes** | Unique identifier for the prompt |
| `schema_version` | number | yes | Always `1` |
| `description` | string | no | Human-readable description |
-| `provider` | enum | no | `openai`, `anthropic`, `google`, `gemini`, `openrouter`, or `any` |
+| `provider` | enum | no | `openai`, `openai-responses`, `anthropic`, `google`, `gemini`, `openrouter`, or `any` |
| `model` | string | no | Model identifier (e.g. `gpt-5.4`, `claude-sonnet-4-20250514`) |
| `fallback_models` | string[] | no | Ordered fallback model list |
| `reasoning` | object | no | `{ effort: low|medium|high, budget_tokens: number }` |
| `sampling` | object | no | `{ temperature, top_p, frequency_penalty, presence_penalty, stop, max_output_tokens }` |
-| `response` | object | no | `{ format: text|json|markdown, stream: boolean }` |
+| `response` | object | no | `{ format: text|json|markdown, stream: boolean, schema?: object, schema_name?: string, schema_strict?: boolean }` |
+| `cache` | object | no | Provider-specific cache controls (`openai`, `anthropic`, `gemini`/`google`) |
| `tools` | array | no | Tool names (strings) or inline definitions with `{ name, description, input_schema }` |
+| `provider_options` | object | no | Provider-specific advanced options (`anthropic`, `gemini`) |
| `mcp` | object | no | `{ servers: [string | { name, config }] }` |
| `context.inputs` | `Array<string | { name, max_size?, trim?, allow_regex?, deny_regex?, non_empty?, reject_secrets? }>` | no | Declared variable names used in templates, with optional size budgets and runtime hardening controls |
| `context.history` | object | no | `{ max_items: number }` |

@@ -182,6 +184,7 @@ prompts/
Supported default fields:
- `provider` (front matter) — default provider for the folder
- `model` (front matter) — default model for the folder
+- `cache` (front matter) — default provider-specific cache hints
- `metadata` (front matter) — merged with prompt-local metadata
- `# System instructions` (body section) — used when the prompt has none

@@ -229,7 +232,7 @@ tiers:
```

Overridable fields: `model`, `fallback_models`, `reasoning`, `sampling`,
-`response`, `tools`.
+`response`, `cache`, `tools`, `provider_options`.

Override application order: **base → environment → tier → runtime**.
docs/index.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ Open-source developer toolkit for managing prompts, system instructions, tools,
- [CLI](./cli.md) — Command-line interface: init, validate, compile, render, inspect, skill
- [API Reference](./api-reference.md) — TypeScript API: `createPromptOpsKit`, `renderPrompt`, standalone functions
- [Schema](./schema.md) — Full YAML front matter schema reference
+- [Vendor Schema Gap Analysis](./vendor-schema-gap-analysis.md) — Snapshot comparison against published OpenAI, Anthropic, Gemini, and OpenRouter schema capabilities
- [Testing](./testing.md) — Test helpers, mock assets, and sidecar test files
- [Validation](./validation.md) — Schema validation, "did you mean?" suggestions, variable checks, and early regex validation
docs/overrides.md

Lines changed: 4 additions & 2 deletions
@@ -51,10 +51,12 @@ Only these fields can be overridden in `environments` and `tiers`:
| `fallback_models` | `string[]` | Fallback model list |
| `reasoning` | `object` | `{ effort, budget_tokens }` |
| `sampling` | `object` | `{ temperature, top_p, frequency_penalty, presence_penalty, stop, max_output_tokens }` |
-| `response` | `object` | `{ format, stream }` |
+| `response` | `object` | `{ format, stream, schema, schema_name, schema_strict }` |
+| `cache` | `object` | Provider-specific cache controls (`openai`, `anthropic`, `gemini`/`google`) |
| `tools` | `array` | Tool references |
+| `provider_options` | `object` | Provider-specific advanced options (`anthropic`, `gemini`) |

-Object fields (`reasoning`, `sampling`, `response`) are shallow-merged — individual sub-fields are replaced, but you don't need to repeat every sub-field.
+Object fields (`reasoning`, `sampling`, `response`, `cache`, `provider_options`) are shallow-merged — individual sub-fields are replaced, but you don't need to repeat every sub-field.

## Applying overrides
docs/schema.md

Lines changed: 46 additions & 3 deletions
@@ -9,14 +9,15 @@ Prompt files use YAML front matter. This page documents every supported field.
| `id` | `string` | Yes | Unique prompt identifier (e.g. `support/reply`) |
| `schema_version` | `number` | Yes | Schema version — currently `1` |
| `description` | `string` | No | Human-readable description of the prompt |
-| `provider` | `string` | No | `openai`, `anthropic`, `gemini`, `google`, `openrouter`, `any` |
+| `provider` | `string` | No | `openai`, `openai-responses`, `anthropic`, `gemini`, `google`, `openrouter`, `any` |
| `model` | `string` | No | Model name (e.g. `gpt-5.4`, `claude-sonnet-4-20250514`) |
| `fallback_models` | `string[]` | No | Ordered list of fallback models |
| `reasoning` | `object` | No | Reasoning/thinking configuration |
| `sampling` | `object` | No | Sampling parameters |
| `response` | `object` | No | Response format and streaming |
| `cache` | `object` | No | Provider-specific prompt/context caching options |
| `tools` | `array` | No | Tool references (strings or inline definitions) |
+| `provider_options` | `object` | No | Provider-specific advanced options (`anthropic`, `gemini`) |
| `mcp` | `object` | No | MCP server references |
| `context` | `object` | No | Declare expected variables and history settings |
| `includes` | `string[]` | No | Paths to included prompt files (relative to this file) |

@@ -30,7 +31,7 @@ Prompt files use YAML front matter. This page documents every supported field.

| Field | Type | Description |
|-------|------|-------------|
-| `provider` | `enum` | Default provider (`openai`, `anthropic`, `google`, `gemini`, `openrouter`, `any`) |
+| `provider` | `enum` | Default provider (`openai`, `openai-responses`, `anthropic`, `google`, `gemini`, `openrouter`, `any`) |
| `model` | `string` | Default model identifier |
| `cache` | `object` | Same as prompt-level `cache` block |
| `metadata` | `object` | Same as the prompt `metadata` block (`owner`, `tags`, `review_required`, `stable`) |

@@ -85,12 +86,54 @@ sampling:
response:
  format: json # text | json | markdown
  stream: true
+  schema:
+    type: object
+    properties:
+      answer:
+        type: string
+  schema_name: support_reply
+  schema_strict: true
```

| Field | Type | Description |
|-------|------|-------------|
| `format` | `'text' \| 'json' \| 'markdown'` | Response format |
| `stream` | `boolean` | Enable streaming |
+| `schema` | `object` | Portable JSON schema object for structured output |
+| `schema_name` | `string` | Optional schema name (used by OpenAI/OpenAI Responses) |
+| `schema_strict` | `boolean` | Strict schema enforcement toggle (OpenAI/OpenAI Responses) |
+
+## `provider_options`
+
+Provider-specific options that are intentionally non-portable:
+
+```yaml
+provider_options:
+  anthropic:
+    top_k: 40
+    tool_choice:
+      type: auto
+  gemini:
+    candidate_count: 1
+    top_k: 32
+    seed: 42
+    response_schema:
+      type: object
+    response_modalities:
+      - TEXT
+    thinking_budget_tokens: 1024
+```
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `anthropic.top_k` | `number` | Anthropic `top_k` sampling control (`>= 0`) |
+| `anthropic.tool_choice` | `object` | Anthropic tool choice object |
+| `gemini.candidate_count` | `number` | Gemini candidate count (`> 0`) |
+| `gemini.top_k` | `number` | Gemini top-k sampling control (`>= 0`) |
+| `gemini.seed` | `number` | Gemini generation seed |
+| `gemini.response_schema` | `object` | Gemini-native response schema |
+| `gemini.response_modalities` | `string[]` | Gemini response modalities |
+| `gemini.thinking_budget_tokens` | `number` | Gemini thinking budget (`> 0`) |

## `tools`

@@ -223,7 +266,7 @@ tiers:
    model: gpt-5.4
```

-Each environment/tier key maps to an overrides object. Overridable fields: `model`, `fallback_models`, `reasoning`, `sampling`, `response`, `cache`, `tools`. See [Overrides](./overrides.md).
+Each environment/tier key maps to an overrides object. Overridable fields: `model`, `fallback_models`, `reasoning`, `sampling`, `response`, `cache`, `tools`, `provider_options`. See [Overrides](./overrides.md).

## `metadata`
docs/vendor-schema-gap-analysis.md

Lines changed: 84 additions & 0 deletions
@@ -0,0 +1,84 @@
# Vendor Schema Gap Analysis (as of April 25, 2026)

This page compares PromptOpsKit's prompt front-matter schema with currently published vendor API schema capabilities.

Primary references:

- OpenAI Responses API + structured outputs + prompt caching:
  - https://platform.openai.com/docs/api-reference/responses/create
  - https://platform.openai.com/docs/api-reference/chat/create
  - https://platform.openai.com/docs/guides/structured-outputs
  - https://platform.openai.com/docs/guides/prompt-caching
- Anthropic Messages API + prompt caching:
  - https://platform.claude.com/docs/en/api/messages
  - https://platform.claude.com/docs/en/build-with-claude/prompt-caching
- Gemini API generation + structured output + caching:
  - https://ai.google.dev/api/generate-content
  - https://ai.google.dev/gemini-api/docs/structured-output
  - https://ai.google.dev/api/caching
- OpenRouter structured outputs + caching:
  - https://openrouter.ai/docs/features/structured-outputs
  - https://openrouter.ai/docs/features/prompt-caching

## Snapshot of current PromptOpsKit schema surface

PromptOpsKit currently models:

- Portable prompt settings (`reasoning`, `sampling`, `response`, `tools`, `context`).
- Provider-specific options in `provider_options` (`anthropic`, `gemini`).
- Provider-specific cache controls in `cache` (`openai`, `anthropic`, `gemini` / `google`).

See [`docs/schema.md`](./schema.md) and [`src/schema/schema.ts`](../src/schema/schema.ts).

## Gap analysis

### OpenAI

| Area | Vendor capability | PromptOpsKit status | Gap |
|---|---|---|---|
| Structured outputs | `response_format: { type: "json_schema", json_schema: { name, schema, strict, description? } }` | Supported via `response.schema`, `response.schema_name`, `response.schema_strict` | **Partial**: PromptOpsKit does not expose schema-level `description`. |
| Chat vs Responses schema parity | OpenAI publishes both Chat Completions and Responses request shapes | PromptOpsKit has dedicated adapters for both (`openai` + `openai-responses`) with shared portable `response.schema*` mapping | **Partial**: API-specific fields are intentionally not fully modeled in front matter. |
| Responses conversation threading checks | Responses supports `conversation` and `previous_response_id` threading fields | PromptOpsKit exposes both via runtime `openaiResponses` options and validates they are mutually exclusive | **Partial**: validation is runtime adapter logic, not a front-matter schema construct. |
| Prompt caching | `prompt_cache_key`, `prompt_cache_retention` (`in_memory` / `24h`) | Supported via `cache.openai.prompt_cache_key`, `cache.openai.retention` | No significant gap. |
| Streaming | `stream` in request body | Supported via `response.stream` for OpenAI adapters | No significant gap. |

### Anthropic

| Area | Vendor capability | PromptOpsKit status | Gap |
|---|---|---|---|
| Prompt caching | Top-level automatic caching + explicit block `cache_control` with `type`/`ttl` | Supported via `cache.anthropic.mode`, `type`, `ttl`, and explicit block toggles | **Operational note**: 1h cache behavior may require vendor beta/version headers controlled by caller. |
| Tool choice / sampling extras | `tool_choice`, `top_k` | Supported via `provider_options.anthropic` | No significant gap. |
| Structured outputs | Anthropic now documents structured outputs capabilities | PromptOpsKit currently warns that `response.schema` is ignored for Anthropic | **Gap**: no first-class Anthropic structured-output mapping yet. |

### Gemini (Google)

| Area | Vendor capability | PromptOpsKit status | Gap |
|---|---|---|---|
| Structured outputs | `generationConfig.responseSchema` and JSON-schema alternatives | Supported via `response.schema` and `provider_options.gemini.response_schema` | **Partial**: PromptOpsKit does not expose a dedicated `response_json_schema` field for Gemini's JSON-schema-specific alternative. |
| Streaming | Endpoint-based streaming (`streamGenerateContent`) with same request schema | PromptOpsKit warns and ignores `response.stream` for Gemini adapter body | **Gap**: no endpoint-switch abstraction based on `response.stream`. |
| Caching | Managed cached resources (`cachedContents`) and request reuse via `cachedContent` | Supported reuse only via `cache.gemini.cached_content` / `cache.google.cached_content` | **Gap**: no schema surface for cache-resource lifecycle (create/list/delete) inputs; only reference by id/name. |

### OpenRouter

| Area | Vendor capability | PromptOpsKit status | Gap |
|---|---|---|---|
| Structured outputs | `response_format` with `json_schema` on compatible models | Supported through OpenAI-compatible adapter path (`response.schema*`) | No major schema gap for common usage. |
| Prompt caching | Provider-dependent + explicit/automatic forms (including Anthropic-style `cache_control`) | Partially supported through existing `cache` fields | **Partial**: OpenRouter-specific knobs/headers are not explicitly modeled as first-class fields. |
| Response-healing / plugins | Optional provider features outside base chat schema | Not modeled in core schema | Out of scope by design (currently). |

## Recommended next schema additions

If we want closer parity with currently published vendor features while preserving portability:

1. **Add optional `response.schema_description`** for OpenAI/OpenRouter structured outputs.
2. **Optionally formalize runtime OpenAI Responses checks** in docs/types (for `conversation` vs `previous_response_id`) as a "runtime schema" contract.
3. **Add Anthropic structured-output support path** (likely in `provider_options.anthropic` first, then portable mapping).
4. **Add Gemini JSON-schema alternative field** (e.g., `provider_options.gemini.response_json_schema`).
5. **Add optional OpenRouter vendor block** under `provider_options.openrouter` for feature flags/headers that are not portable.
6. **Document runtime responsibility for vendor headers** (for capabilities gated by beta/version headers, especially Anthropic caching modes).

## Scope and methodology

- This analysis focuses on **published request-schema capabilities** that affect prompt front matter and adapter request shaping.
- It intentionally excludes pricing, policy, and model-availability differences except where they change request schema behavior.
- Where docs are provider-specific and evolving quickly, treat this page as a dated snapshot and re-verify against vendor docs before implementing changes.
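
To make the OpenAI structured-outputs row above concrete, the portable `response.schema*` fields are expected to map onto OpenAI's `response_format` fragment roughly like this. The shape follows the OpenAI structured-outputs documentation cited above; the exact adapter output is not shown in this commit, so treat this as a sketch:

```yaml
# Rendered OpenAI request fragment (sketch)
response_format:
  type: json_schema
  json_schema:
    name: support_reply   # from response.schema_name
    strict: true          # from response.schema_strict
    schema:               # from response.schema
      type: object
      properties:
        answer:
          type: string
```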
