Direct adapter rendering also accepts `environment` and `tier` selectors. This is useful for compiled JSON/ESM assets in browser, edge, or worker code.
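Since `environment` and `tier` resolve against the compiled asset itself, the selection semantics can be sketched in plain data. The asset shape, the `selectVariant` helper, and the tier-last precedence below are illustrative assumptions, not PromptOpsKit's actual API:

```typescript
// Hypothetical compiled prompt asset; the real JSON/ESM layout may differ.
// Environment and tier blocks override base fields (tier last, assumed).
type Variant = { model?: string; temperature?: number };

interface CompiledPrompt {
  base: Variant;
  environments?: Record<string, Variant>;
  tiers?: Record<string, Variant>;
}

function selectVariant(
  asset: CompiledPrompt,
  environment?: string,
  tier?: string,
): Variant {
  return {
    ...asset.base,
    ...(environment ? asset.environments?.[environment] : undefined),
    ...(tier ? asset.tiers?.[tier] : undefined),
  };
}

const asset: CompiledPrompt = {
  base: { model: 'gpt-4o-mini', temperature: 0.7 },
  environments: { production: { temperature: 0.2 } },
  tiers: { premium: { model: 'gpt-4o' } },
};

const resolved = selectVariant(asset, 'production', 'premium');
// → { model: 'gpt-4o', temperature: 0.2 }
```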

In browser or client-side code, keep provider credentials on the server.
### Provider-specific fields and raw passthrough
Use normalized fields first (`sampling`, `response`, `cache`, `tools`) so prompts stay portable. `response.schema` is the neutral JSON Schema path; adapters emit it as OpenAI/OpenRouter/LLMAsAService `response_format`, OpenAI Responses `text.format`, Anthropic `output_config.format`, and Gemini `generationConfig.responseJsonSchema`.
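
As an illustrative fragment, a neutral schema might be declared like this (the exact nesting under `response.schema` is an assumption based on the field name, not confirmed syntax):

```yaml
response:
  schema:
    type: object
    properties:
      sentiment: { type: string }
    required: [sentiment]
```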

Use `provider_options` when PromptOpsKit has a known provider-specific mapping, such as Anthropic `top_k`, Gemini's native `response_schema`, OpenRouter routing fields, or LLMAsAService gateway routing/customer metadata.
```yaml
response:
  # …
provider_options:
  provider:
    order: ["anthropic", "openai"]
    transforms: ["middle-out"]
  llmasaservice:
    project_id: "llm-project-id"
    # Optional default; usually pass the real customer at render time.
    customer:
      customer_id: "cust_123"
      customer_name: "Acme"
```

For LLMAsAService, `provider_options.llmasaservice.customer` is meant to carry render-time attribution for the current account/user. A prompt can keep a default, but production calls should normally override it through `runtime.provider_options.llmasaservice.customer`.
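
That precedence can be pictured with a stand-in merge (the `LLMAsAServiceOptions` type and `mergeLLMAsAServiceOptions` helper below are illustrative, not PromptOpsKit's implementation):

```typescript
// Illustrative option type and merge; not PromptOpsKit's actual code.
type LLMAsAServiceOptions = {
  project_id?: string;
  customer?: { customer_id: string; customer_name?: string };
};

// Runtime options win field by field over the prompt's static defaults.
function mergeLLMAsAServiceOptions(
  fromPrompt: LLMAsAServiceOptions,
  fromRuntime: LLMAsAServiceOptions,
): LLMAsAServiceOptions {
  return { ...fromPrompt, ...fromRuntime };
}

const effective = mergeLLMAsAServiceOptions(
  // Default attribution baked into the prompt.
  { project_id: 'llm-project-id', customer: { customer_id: 'cust_123', customer_name: 'Acme' } },
  // Real customer supplied at render time.
  { customer: { customer_id: 'cust_789', customer_name: 'Globex' } },
);
// → project_id kept from the prompt; customer replaced by the runtime value
```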

When a provider adds a body field PromptOpsKit does not model yet, use `raw`:

```yaml
raw:
  openrouter:
    usage:
      include: true
  llmasaservice:
    conversationId: "conv_123"
```
Each adapter reads only its matching raw block and shallow-merges it into the generated request body after normalized mappings. This is intentionally an escape hatch; prefer first-class fields when they exist.
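
A shallow merge means top-level raw keys replace generated ones wholesale; nested objects are swapped out, not deep-merged. The sketch below is a stand-in with example values, not the adapter's actual code:

```typescript
// Base request body produced by the normalized mappings (example values).
const generated = {
  model: 'gpt-4o-mini',
  temperature: 0.2,
  usage: { include: false },
};

// The adapter's matching raw block, e.g. the contents of raw.openrouter.
const rawBlock = {
  usage: { include: true },
};

// Shallow merge: top-level raw keys win; nested objects are replaced
// as a whole rather than merged key by key.
const body = { ...generated, ...rawBlock };
// → { model: 'gpt-4o-mini', temperature: 0.2, usage: { include: true } }
```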
## Optional UsageTap Tracking

PromptOpsKit can also help you track provider calls with UsageTap.com while keeping the core render API transport-light.

```typescript
import { createPromptOpsKit } from 'promptopskit';
// …
```

- `entitlementMode` defaults to `'off'`. Set it to `'apply'` only when you want UsageTap allowances to mutate a cloned provider request.
- `runOpenRouterWithUsageTap`, `runLLMAsAServiceWithUsageTap`, `runAnthropicWithUsageTap`, and `runGeminiWithUsageTap` follow the same pattern.
- `extractOpenAIUsage`, `extractAnthropicUsage`, and `extractGeminiUsage` are public if you want to manage UsageTap lifecycle yourself.

For explicit lifecycle control, use `beginUsageTapCall`, `endUsageTapCall`, or `withUsageTapCall` from `promptopskit/usagetap`. Full documentation: [docs/usagetap.md](./docs/usagetap.md).
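
The explicit lifecycle is essentially a begin/finally-end pattern. The generic wrapper below only illustrates that shape; it is not the `promptopskit/usagetap` implementation, and the real helpers are the `beginUsageTapCall`/`endUsageTapCall`/`withUsageTapCall` functions named above:

```typescript
// Illustrative stand-in for a begin/end usage-tracking lifecycle.
interface CallHandle { id: number }

const events: string[] = [];
let nextId = 1;

function beginCall(): CallHandle {
  const handle = { id: nextId++ };
  events.push(`begin ${handle.id}`);
  return handle;
}

function endCall(handle: CallHandle): void {
  events.push(`end ${handle.id}`);
}

// The `with` form guarantees the end event fires even when the call throws.
function withCall<T>(fn: () => T): T {
  const handle = beginCall();
  try {
    return fn();
  } finally {
    endCall(handle);
  }
}

const result = withCall(() => 'ok');
// events is now ['begin 1', 'end 1']
```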

Renders a prompt for a specific provider. Returns `{ resolved, request?, returnM… }`.

| Option | Type | Description |
|--------|------|-------------|
| `path` | `string` | Prompt path (no extension), e.g. `'support/reply'` |
| `source` | `string` | Inline prompt source (alternative to `path`) |
| `onContextOverflow` | `(info) => string` | Optional callback to transform oversized context values before rendering |
| `onHistoryCompaction` | `(info) => string \| { role, content }` | Optional callback to compact overflow history when `context.history.max_items` is exceeded |

Prompt files use YAML front matter with these fields:

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `context.inputs` | `Array<string \| { name, max_size?, trim?, allow_regex?, deny_regex?, non_empty?, reject_secrets? }>` | no | Declared variable names used in templates, with optional size budgets and runtime hardening controls |
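
Read as a validation pipeline, the hardening controls might behave roughly like this sketch (an interpretation of the field names above, not PromptOpsKit's actual checks; character counts for `max_size` are an assumption):

```typescript
interface InputSpec {
  name: string;
  max_size?: number;    // size budget, assumed to be in characters
  trim?: boolean;       // strip surrounding whitespace before checks
  allow_regex?: string; // value must match this pattern
  deny_regex?: string;  // value must not match this pattern
  non_empty?: boolean;  // reject empty strings
}

// Returns null when the value passes, or a reason string when it fails.
function checkInput(spec: InputSpec, raw: string): string | null {
  const value = spec.trim ? raw.trim() : raw;
  if (spec.non_empty && value.length === 0) return 'empty value';
  if (spec.max_size !== undefined && value.length > spec.max_size) return 'over budget';
  if (spec.allow_regex && !new RegExp(spec.allow_regex).test(value)) return 'not allowed';
  if (spec.deny_regex && new RegExp(spec.deny_regex).test(value)) return 'denied pattern';
  return null;
}

const spec: InputSpec = { name: 'ticket_id', trim: true, non_empty: true, allow_regex: '^T-\\d+$' };
checkInput(spec, '  T-42  '); // → null (passes after trimming)
checkInput(spec, 'abc');      // → 'not allowed'
```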

Prefer portable fields first:

- Use `cache` for provider cache hints
- Use `tools` for tool definitions

Treat `response.schema` as the provider-neutral JSON Schema contract. The adapters emit it through provider-specific request fields: OpenAI/OpenRouter/LLMAsAService `response_format`, OpenAI Responses `text.format`, Anthropic `output_config.format`, and Gemini `generationConfig.responseJsonSchema`.

Use `provider_options` for known non-portable mappings:

```yaml
provider_options:
  provider:
    order: ["anthropic", "openai"]
    transforms: ["middle-out"]
  llmasaservice:
    project_id: llm-project-id
    # Optional default; usually pass the real customer at render time.
    customer:
      customer_id: cust_123
      customer_name: Acme
```

For LLMAsAService, prefer putting the current customer/account/user attribution in `runtime.provider_options.llmasaservice.customer` during rendering. Static prompt metadata may include a default, but runtime values should override it for real requests.

Use `raw` only when a vendor request-body field is important and PromptOpsKit does not model it yet:
```yaml
raw:
  openrouter:
    usage:
      include: true
  llmasaservice:
    conversationId: conv_123
```

Raw blocks are provider-scoped (`openai`, `openai-responses`/`openai_responses`, `anthropic`, `gemini`/`google`, `openrouter`, `llmasaservice`) and are shallow-merged into the final request body after normalized fields. When adding `raw`, include a short note in `# Notes` explaining why a first-class field is not being used.

Supported adapter names are `openai`, `openai-responses`, `anthropic`, `gemini`/`google`, `openrouter`, and `llmasaservice`.
`RuntimeRenderOptions` for direct adapter rendering supports `environment`, `tier`, `runtime`, `variables`, `onContextOverflow`, `history`, `onHistoryCompaction`, `toolRegistry`, `strict`, and `openaiResponses`.
Runtime overrides can include the same overridable front matter fields as `environments` and `tiers`, including `raw` provider passthrough blocks. Raw blocks are merged into provider request bodies after normalized fields and provider-specific options.