Add AI SDK telemetry integration (`createEvlogIntegration`), cost estimation, and enriched embedding capture. `createEvlogIntegration()` implements the AI SDK's `TelemetryIntegration` interface to capture per-tool execution timing/success/errors and total generation wall time. Cost estimation computes `ai.estimatedCost` from a user-provided pricing map. `captureEmbed` now accepts model ID, dimensions, and batch count for richer embedding observability.
`apps/docs/content/2.logging/5.ai-sdk.md` (116 additions, 6 deletions)
@@ -16,19 +16,21 @@ links:
 variant: subtle
 ---

-`evlog/ai` gives you full AI observability by wrapping your model with middleware. Token usage, tool calls, streaming performance, cache hits, reasoning tokens, all captured into the wide event automatically.
+`evlog/ai` gives you full AI observability by wrapping your model with middleware and an optional telemetry integration. Token usage, tool calls, tool execution timing, streaming performance, cache hits, reasoning tokens, cost estimation — all captured into the wide event automatically.

 ::code-collapse

 ```txt [Prompt]
 Add AI observability to my app with evlog.

 - Install the AI SDK: pnpm add ai
-- Import createAILogger from 'evlog/ai'
+- Import createAILogger and createEvlogIntegration from 'evlog/ai'
 - Create an AI logger with createAILogger(log) where log is your request logger
 - Wrap your model with ai.wrap('anthropic/claude-sonnet-4.6') and pass it to generateText, streamText, etc.
 - Token usage, tool calls, streaming metrics, and errors are captured automatically into the wide event
-- For embedding calls, use ai.captureEmbed({ usage }) after embed() or embedMany()
+- For deeper observability (tool execution timing, total generation wall time), add createEvlogIntegration(ai) to experimental_telemetry.integrations
+- For embedding calls, use ai.captureEmbed({ usage, model, dimensions, count }) after embed() or embedMany()
 - Works with all frameworks: Nuxt, Express, Hono, Fastify, NestJS, Elysia, standalone

 Docs: https://www.evlog.dev/logging/ai-sdk
@@ -117,8 +119,8 @@ Your wide event now includes:
 | Method | Description |
 |--------|-------------|
-|`wrap(model)`| Wraps a language model with middleware. Accepts a model string (e.g. `'anthropic/claude-sonnet-4.6'`) or a `LanguageModelV3` object. Works with `generateText`, `streamText`, `generateObject`, `streamObject`, and `ToolLoopAgent`. Also works with pre-wrapped models (e.g. from supermemory). |
-|`captureEmbed(result)`| Manually captures token usage from `embed()` or `embedMany()` results (embedding models use a different type). |
+|`wrap(model)`| Wraps a language model with middleware. Accepts a model string (e.g. `'anthropic/claude-sonnet-4.6'`) or a `LanguageModelV3` object. Works with `generateText`, `streamText`, and `ToolLoopAgent`. Also works with pre-wrapped models (e.g. from supermemory). |
+|`captureEmbed(result)`| Manually captures token usage, model info, and dimensions from `embed()` or `embedMany()` results (embedding models use a different type). |

 The middleware intercepts calls at the provider level. It does not touch your callbacks, prompts, or responses. Captured data flows through the normal evlog pipeline (sampling, enrichers, drains) and ends up in Axiom, Better Stack, or wherever you drain to.
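For the enriched `captureEmbed` call in the table above, a minimal hedged sketch. The embedding model, the `@ai-sdk/openai` helper, and `log` are assumptions for illustration, not part of this change:

```typescript
import { createAILogger } from 'evlog/ai'
import { embedMany } from 'ai'
import { openai } from '@ai-sdk/openai'

const ai = createAILogger(log) // log: your request logger

const values = ['how do I configure drains?', 'what is a wide event?']

// Hypothetical embedding call; use whichever embedding model you actually run.
const { embeddings, usage } = await embedMany({
  model: openai.textEmbeddingModel('text-embedding-3-small'),
  values,
})

// Token usage plus model ID, vector dimensions, and batch size flow into the wide event.
ai.captureEmbed({
  usage,
  model: 'text-embedding-3-small',
  dimensions: embeddings[0]?.length ?? 0,
  count: values.length,
})
```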
@@ -127,6 +129,7 @@ The middleware intercepts calls at the provider level. It does not touch your ca
 | Option | Type | Default | Description |
 |--------|------|---------|-------------|
 |`toolInputs`|`boolean \| ToolInputsOptions`|`false`| When enabled, `toolCalls` contains `{ name, input }` objects instead of plain strings. Opt-in because inputs can be large and may contain sensitive data. |
+|`cost`|`Record<string, ModelCost>`|`undefined`| Pricing map for cost estimation. Keys are model IDs, values are `{ input, output }` in dollars per 1M tokens. |

 Pass `true` to capture all inputs as-is, or an options object for fine-grained control:
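Separately, for the `cost` pricing map in the options table above, a minimal hedged sketch. The model IDs and prices are placeholders, and `log` is assumed to be your request logger:

```typescript
import { createAILogger } from 'evlog/ai'

const ai = createAILogger(log, {
  // Placeholder pricing: dollars per 1M input/output tokens, keyed by model ID.
  cost: {
    'anthropic/claude-sonnet-4.6': { input: 3, output: 15 },
    'anthropic/claude-haiku-4.5': { input: 1, output: 5 },
  },
})

// When a wrapped call matches one of these model IDs, the wide event gains
// an `ai.estimatedCost` value computed from the captured token usage.
```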
@@ -152,6 +155,14 @@ const ai = createAILogger(log, {
Wrap each model separately; they share the same accumulator. When multiple models are used, the wide event includes both `model` (last model) and `models` (all unique models):
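A minimal sketch of that multi-model setup, with placeholder model IDs and `log` assumed to be in scope:

```typescript
import { createAILogger } from 'evlog/ai'

const ai = createAILogger(log) // log: your request logger

// Placeholder model IDs; both wrapped models share the same accumulator.
const fast = ai.wrap('anthropic/claude-haiku-4.5')
const smart = ai.wrap('anthropic/claude-sonnet-4.6')

// After calls through either model, the wide event includes `model` (last used)
// and `models` (all unique models seen on this request).
```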
@@ -335,6 +360,87 @@ import { anthropic } from '@ai-sdk/anthropic'
 const model = ai.wrap(anthropic('claude-sonnet-4.6'))
 ```
+
+## Telemetry Integration
+
+For deeper observability — tool execution timing, success/failure tracking, and total generation wall time — use `createEvlogIntegration()`. It implements the AI SDK's `TelemetryIntegration` interface and captures data that middleware alone cannot see.
+
+### Combined with middleware (recommended)
+
+When passed an `AILogger`, the integration shares its accumulator. Both paths write to the same `ai.*` fields:
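A hedged sketch of that combined setup, assuming `log` is your request logger and `messages` is already in scope:

```typescript
import { createAILogger, createEvlogIntegration } from 'evlog/ai'
import { generateText } from 'ai'

const ai = createAILogger(log)
const integration = createEvlogIntegration(ai) // shares the AILogger's accumulator

const result = await generateText({
  // Middleware path: tokens, model info, streaming metrics
  model: ai.wrap('anthropic/claude-sonnet-4.6'),
  messages,
  experimental_telemetry: {
    isEnabled: true,
    // Integration path: per-tool timing, total wall time
    integrations: [integration],
  },
})
```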
+If your model is already wrapped (e.g. by another middleware), pass the request logger directly:
+
+```typescript [server/api/chat.post.ts]
+import { createEvlogIntegration } from 'evlog/ai'
+
+const integration = createEvlogIntegration(log)
+
+const result = await generateText({
+  model: somePreWrappedModel,
+  experimental_telemetry: {
+    isEnabled: true,
+    integrations: [integration],
+  },
+})
+```
+
+### What the integration captures
+
+| Data | Source | Description |
+|------|--------|-------------|
+|`ai.tools[]`|`onToolCallFinish`| Per-tool `name`, `durationMs`, `success`, and `error` (if failed) |
+|`ai.totalDurationMs`|`onStart` → `onFinish`| Total wall time from generation start to completion |
+
+The middleware captures tokens, model info, and streaming metrics. The integration captures tool execution timing. Together, they give you complete AI observability.
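As a purely illustrative sketch of the data this adds: the values below are made up, the per-tool shape follows the table above, and how it is keyed inside the wide event is an assumption:

```typescript
// Example of what the integration records for one generation.
const tools = [
  { name: 'getWeather', durationMs: 182, success: true },
  { name: 'searchDocs', durationMs: 2410, success: false, error: 'Request timed out' },
]
const totalDurationMs = 5830
```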
 ## Captured Data

 | Wide event field | Source | Description |

@@ -358,6 +464,10 @@ const model = ai.wrap(anthropic('claude-sonnet-4.6'))
 |`ai.msToFinish`| Stream timing | Total stream duration (streaming only) |
 |`ai.tokensPerSecond`| Computed | Output tokens per second (streaming only) |
 |`ai.error`| Error capture | Error message if a model call fails |
@@ -866,7 +866,9 @@ Works in all frameworks: Nuxt (`evlog` config), Nitro (`evlog()` module options)
 ## AI SDK Integration

-Capture token usage, tool calls, model info, and streaming metrics from the Vercel AI SDK into wide events. Import from `evlog/ai`. Requires `ai >= 6.0.0` as a peer dependency.
+Capture token usage, tool calls, model info, streaming metrics, tool execution timing, cost estimation, and embedding metadata from the Vercel AI SDK into wide events. Import from `evlog/ai`. Requires `ai >= 6.0.0` as a peer dependency.
+
+### Basic setup (middleware)

 ```typescript
 import { createAILogger } from 'evlog/ai'
@@ -877,22 +879,62 @@ const ai = createAILogger(log)
 const result = streamText({
   model: ai.wrap('anthropic/claude-sonnet-4.6'), // accepts string or model object
   messages,
-  onFinish: ({ text }) => {
-    // User callbacks remain free — no conflict
+})
+```
+
-`ai.wrap()` uses model middleware to transparently capture all LLM calls. Works with `generateText`, `streamText`, `generateObject`, `streamObject`, and `ToolLoopAgent`.
+`ai.wrap()` uses model middleware to transparently capture all LLM calls. Works with `generateText`, `streamText`, and `ToolLoopAgent`.
+
+### Telemetry integration (deeper observability)
+
+For tool execution timing, success/failure tracking, and total generation wall time, add `createEvlogIntegration()`:
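A short hedged sketch continuing the basic setup above, with `log` and `messages` assumed to be in scope:

```typescript
import { createAILogger, createEvlogIntegration } from 'evlog/ai'
import { streamText } from 'ai'

const ai = createAILogger(log)
const integration = createEvlogIntegration(ai) // reuses the AI logger's accumulator

const result = streamText({
  model: ai.wrap('anthropic/claude-sonnet-4.6'),
  messages,
  experimental_telemetry: {
    isEnabled: true,
    integrations: [integration],
  },
})
```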
+This adds `ai.tools` (per-tool `{ name, durationMs, success, error? }`) and `ai.totalDurationMs` to the wide event.