Commit e7980c2

Merge pull request #68 from LAA-Software-Engineering/feature/examples-example1-openai-cost-docs
Examples, docs, and OpenAI run cost estimation
2 parents 97c1e47 + 639c566, commit e7980c2

11 files changed: 275 additions & 26 deletions

.gitignore

Lines changed: 3 additions & 0 deletions

```diff
@@ -33,3 +33,6 @@ go.work.sum
 # Editor/IDE
 # .idea/
 # .vscode/
+
+tmp/
+examples/**/.agentic/
```

docs/EXAMPLES.md

Lines changed: 109 additions & 22 deletions
```diff
@@ -1,6 +1,8 @@
 # Examples
 
-Short, runnable patterns for **`apiVersion: agentic.dev/v0`**. For the full YAML spec, CLI behaviour, and field semantics, see [**`design_doc.md`**](design_doc.md).
+Short, runnable patterns for **`apiVersion: agentic.dev/v0`**. For the full YAML spec, CLI behaviour, and field semantics, see [**`DESIGN_DOC.md`**](DESIGN_DOC.md).
+
+A checked-in copy of the **OpenAI `support_snippet`** project from **section 4** lives under [**`examples/example1/`**](../examples/example1/). Its **`metadata.name`** is **`example1`**, matching that folder. From the repository root, pass **`--project examples/example1`** to **`agentctl`** (or **`cd` there** and use **`--project .`**).
 
 ---
 
```

````diff
@@ -20,7 +22,7 @@ my-agent-system/
     workflows/hello.yaml
 ```
 
-The generated files match the snippets in sections 2–4 below (with `metadata.name` set from the argument you pass to `init`).
+Sections **2–3** mirror what `init` creates. **Section 4** is a separate **`gpt-4o-mini`** project layout you can copy beside or instead of the scaffold.
 
 ---
 
````

````diff
@@ -102,23 +104,42 @@ agentctl run workflow/hello --project my-agent-system
 
 ---
 
-## 4. OpenAI chat (real model)
+## 4. Real OpenAI example (`gpt-4o-mini`)
+
+This is a small but **end-to-end** project: a **native echo** step supplies fixed “policy” text, then **`gpt-4o-mini`** drafts a one-line customer reply. You need a valid **[OpenAI API key](https://platform.openai.com/api-keys)** and outbound **HTTPS** to `api.openai.com`.
+
+**Repo copy:** [**`examples/example1/`**](../examples/example1/) — **`agentctl validate --project examples/example1`** from the repo root, or **`agentctl validate --project .`** after **`cd examples/example1`**.
+
+The runtime calls OpenAI’s **`/v1/chat/completions`** endpoint. The agent **must** answer with a **single JSON object** (no markdown fences); the engine parses that object and exposes its fields to **`spec.output`**.
+
+**`totalCostUsd` on runs** is accumulated from each step’s reported cost. Native tools report **0**. For **OpenAI**, the client estimates USD from the API **`usage`** token counts × approximate per-million rates for known models (**`gpt-4o-mini`**, **`gpt-4o`**, and dated variants such as **`gpt-4o-mini-…`**). Other model ids stay at **0** until their rates are added in code; see **`internal/models/openai_cost.go`** and verify against [OpenAI pricing](https://openai.com/api/pricing/).
+
+### Layout
 
-The control plane currently wires **`type: openai`** to the OpenAI **`/v1/chat/completions`** API. Set a key via **`apiKeyFrom`** (MVP: **`env:VAR`** only).
+```text
+example1/
+  project.yaml
+  policies/default.yaml
+  tools/helper.yaml
+  agents/support_writer.yaml
+  workflows/support_snippet.yaml
+```
 
-**`project.yaml`** (add an `openai` provider and point defaults at an OpenAI model id):
+Reuse **`policies/default.yaml`** and **`tools/helper.yaml`** from **section 3** unchanged.
+
+### `project.yaml`
 
 ```yaml
 apiVersion: agentic.dev/v0
 kind: Project
 metadata:
-  name: my-agent-system
+  name: example1
 spec:
   imports:
     - ./policies/default.yaml
     - ./tools/helper.yaml
-    - ./agents/assistant.yaml
-    - ./workflows/chat.yaml
+    - ./agents/support_writer.yaml
+    - ./workflows/support_snippet.yaml
   defaults:
     policy: default
     model: openai/gpt-4o-mini
````
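The hunk above says the agent must answer with a single JSON object and no markdown fences, which the engine then parses and exposes to `spec.output`. As a rough illustration only (the helper name and the fence-tolerant fallback are assumptions, not the engine's actual code), parsing such a reply in Go could look like:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// parseAgentReply is a hypothetical helper: it tolerates an accidental
// ```json fence but otherwise requires exactly one JSON object.
func parseAgentReply(raw string) (map[string]any, error) {
	s := strings.TrimSpace(raw)
	// Strip a surrounding code fence if the model ignored instructions.
	if strings.HasPrefix(s, "```") {
		s = strings.TrimPrefix(s, "```json")
		s = strings.TrimPrefix(s, "```")
		s = strings.TrimSuffix(strings.TrimSpace(s), "```")
		s = strings.TrimSpace(s)
	}
	var obj map[string]any
	if err := json.Unmarshal([]byte(s), &obj); err != nil {
		return nil, fmt.Errorf("reply is not a single JSON object: %w", err)
	}
	return obj, nil
}

func main() {
	obj, err := parseAgentReply(`{"subject": "Your return", "line": "We offer 30-day returns."}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(obj["subject"])
}
```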
````diff
@@ -131,46 +152,112 @@ spec:
       apiKeyFrom: env:OPENAI_API_KEY
 ```
 
-```bash
-export OPENAI_API_KEY="sk-..."   # required before validate/plan/apply/run
-```
+### `agents/support_writer.yaml`
 
-**`agents/assistant.yaml`** — `metadata.name` is what workflow steps reference in **`agent:`**. The executor expects the model’s reply to be a **JSON object** (plain text, not fenced code blocks).
+`metadata.name` is the value you use in **`agent:`** on the workflow step.
 
 ```yaml
 apiVersion: agentic.dev/v0
 kind: Agent
 metadata:
-  name: assistant
+  name: support_writer
 spec:
   model: openai/gpt-4o-mini
   policy: default
+  constraints:
+    timeoutSeconds: 60
   instructions: |
-    You are a concise assistant. Respond with a single JSON object only, no markdown.
-    Shape: {"message": "<your reply>"}
+    You draft short customer-facing email lines for a storefront.
+    You receive JSON in the user message: product name and a return-policy line from internal systems.
+    Respond with one JSON object only (no markdown, no code fences).
+    Use exactly this shape: {"subject": "<=8 words>", "line": "<=25 words, friendly>"}
 ```
 
-**`workflows/chat.yaml`**
+### `workflows/support_snippet.yaml`
+
+The compose step passes the echo step’s payload into the model via **`${steps.context.output.echo...}`** (see §13.1 in **`DESIGN_DOC.md`**).
+
+**CLI-driven product (requires `--input`).** If you use **`${input.product}`** anywhere in the workflow, you **must** pass **`--input product=...`** on **`run`**. Otherwise interpolation fails with **`undefined path "input.product"`** because the run input object is empty.
 
 ```yaml
 apiVersion: agentic.dev/v0
 kind: Workflow
 metadata:
-  name: chat
+  name: support_snippet
 spec:
   policy: default
   steps:
-    - id: reply
-      agent: assistant
+    - id: context
+      uses: tool.helper.echo
+      with:
+        product: "${input.product}"
+        policy_line: "30-day returns on all SKUs; free outbound shipping on defects."
+    - id: compose
+      agent: support_writer
       with:
-        topic: "Say hello in one short sentence."
+        product: "${input.product}"
+        return_policy: "${steps.context.output.echo.policy_line}"
+  output:
+    value:
+      product: ${input.product}
+      subject: ${steps.compose.output.subject}
+      line: ${steps.compose.output.line}
 ```
 
+**Zero-argument demo.** To run **`agentctl run workflow/support_snippet`** with no **`--input`**, put a literal product on the first step and thread it through **`steps.context.output.echo`** (the checked-in [**`examples/example1/`**](../examples/example1/) tree uses **`${input.product}`** instead, so it **requires** **`--input product=...`** unless you edit the YAML):
+
+```yaml
+    - id: context
+      uses: tool.helper.echo
+      with:
+        product: "ACME USB-C hub"   # literal default; or "${input.product}" + --input product=...
+        policy_line: "30-day returns on all SKUs; free outbound shipping on defects."
+    - id: compose
+      agent: support_writer
+      with:
+        product: "${steps.context.output.echo.product}"
+        return_policy: "${steps.context.output.echo.policy_line}"
+  output:
+    value:
+      product: ${steps.context.output.echo.product}
+      subject: ${steps.compose.output.subject}
+      line: ${steps.compose.output.line}
+```
+
+### Commands
+
+If you copied the files to another folder, point **`--project`** at that path instead. For the [**in-repo example**](../examples/example1/), from the **repository root** use **`examples/example1`** (the directory path), not only the project name **`example1`**.
+
+```bash
+export OPENAI_API_KEY="sk-..."   # required for any step that calls the model
+
+agentctl validate --project examples/example1
+agentctl plan --project examples/example1
+agentctl apply --project examples/example1 --auto-approve
+
+# Checked-in example1 workflow uses ${input.product} on the context step:
+agentctl run workflow/support_snippet --project examples/example1 --input product="ACME USB-C hub"
+
+# After switching the workflow to a literal product + steps.context... (see above), you can omit --input.
+```
+
+Default **`run`** output is still **Run ID + status**. To see the workflow **`spec.output`** object (**`product`**, **`subject`**, **`line`**, etc.):
+
+```bash
+agentctl logs --run <run-id> --project examples/example1
+```
+
+After the trace table, the CLI prints **Workflow output (from spec.output)** as indented JSON when the run succeeded and **`output_json`** is non-empty.
+
+Or list recent runs as JSON (includes **`output`** on each run):
+
 ```bash
-agentctl run workflow/chat --project my-agent-system
+agentctl logs -o json --project examples/example1
 ```
 
-Optional: add **`spec.output.schema`** on the agent (path relative to project root) to validate the JSON against JSON Schema; see test fixtures under `internal/engine/testdata/` and **`design_doc.md`**.
+**`agentctl logs --run <id> -o json`** also includes top-level **`input`**, **`output`**, and **`workflowName`** alongside **`events`**.
+
+Optional: add **`spec.output.schema`** on the agent (path relative to the project root) so replies are validated with JSON Schema; see `internal/engine/testdata/wfproj/schemas/` and **`DESIGN_DOC.md`**.
 
 ---
 
````
examples/example1/agents/support_writer.yaml

Lines changed: 14 additions & 0 deletions (new file)

```yaml
apiVersion: agentic.dev/v0
kind: Agent
metadata:
  name: support_writer
spec:
  model: openai/gpt-4o-mini
  policy: default
  constraints:
    timeoutSeconds: 60
  instructions: |
    You draft short customer-facing email lines for a storefront.
    You receive JSON in the user message: product name and a return-policy line from internal systems.
    Respond with one JSON object only (no markdown, no code fences).
    Use exactly this shape: {"subject": "<=8 words>", "line": "<=25 words, friendly>"}
```
examples/example1/policies/default.yaml

Lines changed: 8 additions & 0 deletions (new file)

```yaml
apiVersion: agentic.dev/v0
kind: Policy
metadata:
  name: default
spec:
  execution:
    maxWallClockSeconds: 300
    maxTotalCostUsd: 5
```
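The policy above caps a run at 5 USD of accumulated cost. A hedged sketch of how such a cap might be enforced while summing per-step costs (the function name and abort semantics are assumptions, not the engine's actual code):

```go
package main

import "fmt"

// checkCostCap is a hypothetical guard: native tool steps report 0 USD,
// model steps report their estimated cost, and the run stops once the
// running total exceeds the policy's maxTotalCostUsd.
func checkCostCap(stepCosts []float64, maxTotalCostUsd float64) (total float64, ok bool) {
	for _, c := range stepCosts {
		total += c
		if total > maxTotalCostUsd {
			return total, false
		}
	}
	return total, true
}

func main() {
	// One native echo step (0 USD) plus one small gpt-4o-mini call
	// stays far under the example policy's 5 USD cap.
	total, ok := checkCostCap([]float64{0, 0.00045}, 5)
	fmt.Println(total, ok)
}
```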

examples/example1/project.yaml

Lines changed: 20 additions & 0 deletions (new file)

```yaml
apiVersion: agentic.dev/v0
kind: Project
metadata:
  name: example1
spec:
  imports:
    - ./policies/default.yaml
    - ./tools/helper.yaml
    - ./agents/support_writer.yaml
    - ./workflows/support_snippet.yaml
  defaults:
    policy: default
    model: openai/gpt-4o-mini
  providers:
    models:
      mock:
        type: mock
      openai:
        type: openai
        apiKeyFrom: env:OPENAI_API_KEY
```
examples/example1/tools/helper.yaml

Lines changed: 6 additions & 0 deletions (new file)

```yaml
apiVersion: agentic.dev/v0
kind: Tool
metadata:
  name: helper
spec:
  type: native
```
examples/example1/workflows/support_snippet.yaml

Lines changed: 22 additions & 0 deletions (new file)

```yaml
apiVersion: agentic.dev/v0
kind: Workflow
metadata:
  name: support_snippet
spec:
  policy: default
  steps:
    - id: context
      uses: tool.helper.echo
      with:
        product: "${input.product}"
        policy_line: "30-day returns on all SKUs; free outbound shipping on defects."
    - id: compose
      agent: support_writer
      with:
        product: "${steps.context.output.echo.product}"
        return_policy: "${steps.context.output.echo.policy_line}"
  output:
    value:
      product: ${steps.context.output.echo.product}
      subject: ${steps.compose.output.subject}
      line: ${steps.compose.output.line}
```

internal/models/models_test.go

Lines changed: 30 additions & 2 deletions

```diff
@@ -3,6 +3,7 @@ package models
 import (
 	"context"
 	"encoding/json"
+	"math"
 	"net/http"
 	"net/http/httptest"
 	"strings"
@@ -118,13 +119,13 @@ func TestOpenAIClient_Generate_usesChatCompletions(t *testing.T) {
 			t.Errorf("Authorization %q", auth)
 		}
 		w.Header().Set("Content-Type", "application/json")
-		_, _ = w.Write([]byte(`{"choices":[{"message":{"content":"hello"}}]}`))
+		_, _ = w.Write([]byte(`{"choices":[{"message":{"content":"hello"}}],"usage":{"prompt_tokens":1000,"completion_tokens":500}}`))
 	}))
 	defer srv.Close()
 
 	c := &OpenAIClient{APIKey: "sk-mock", BaseURL: srv.URL + "/v1", HTTPClient: srv.Client()}
 	resp, err := c.Generate(context.Background(), GenerateRequest{
-		Model: "gpt-4.1",
+		Model: "gpt-4o-mini",
 		Messages: []ChatMessage{
 			{Role: "user", Content: "hi"},
 		},
@@ -135,6 +136,33 @@ func TestOpenAIClient_Generate_usesChatCompletions(t *testing.T) {
 	if resp.Content != "hello" {
 		t.Fatalf("content %q", resp.Content)
 	}
+	// 1000/1e6*0.15 + 500/1e6*0.60 = 0.00045
+	want := 1000.0/1e6*0.15 + 500.0/1e6*0.60
+	if math.Abs(resp.Meta.CostUSD-want) > 1e-9 {
+		t.Fatalf("CostUSD got %v want %v", resp.Meta.CostUSD, want)
+	}
+}
+
+func TestOpenAIClient_Generate_unknownModel_zeroCost(t *testing.T) {
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		w.Header().Set("Content-Type", "application/json")
+		_, _ = w.Write([]byte(`{"choices":[{"message":{"content":"x"}}],"usage":{"prompt_tokens":100,"completion_tokens":100}}`))
+	}))
+	defer srv.Close()
+
+	c := &OpenAIClient{APIKey: "sk-mock", BaseURL: srv.URL + "/v1", HTTPClient: srv.Client()}
+	resp, err := c.Generate(context.Background(), GenerateRequest{
+		Model: "unknown-model-xyz",
+		Messages: []ChatMessage{
+			{Role: "user", Content: "hi"},
+		},
+	})
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.Meta.CostUSD != 0 {
+		t.Fatalf("CostUSD %v", resp.Meta.CostUSD)
+	}
 }
 
 func TestResolveAPIKeyFrom_env(t *testing.T) {
```

internal/models/openai.go

Lines changed: 11 additions & 1 deletion

```diff
@@ -93,16 +93,26 @@ func (c *OpenAIClient) Generate(ctx context.Context, req GenerateRequest) (Gener
 				Content string `json:"content"`
 			} `json:"message"`
 		} `json:"choices"`
+		Usage *struct {
+			PromptTokens     int `json:"prompt_tokens"`
+			CompletionTokens int `json:"completion_tokens"`
+		} `json:"usage"`
 	}
 	if err := json.Unmarshal(b, &out); err != nil {
 		return GenerateResponse{}, fmt.Errorf("models: decode openai response: %w", err)
 	}
 	if len(out.Choices) == 0 {
 		return GenerateResponse{}, fmt.Errorf("models: openai returned no choices")
 	}
+	var pt, ct int
+	if out.Usage != nil {
+		pt = out.Usage.PromptTokens
+		ct = out.Usage.CompletionTokens
+	}
+	cost := estimateOpenAIChatCostUSD(req.Model, pt, ct)
 	return GenerateResponse{
 		Content: out.Choices[0].Message.Content,
-		Meta:    GenerateMeta{DurationMs: time.Since(start).Milliseconds(), CostUSD: 0},
+		Meta:    GenerateMeta{DurationMs: time.Since(start).Milliseconds(), CostUSD: cost},
 	}, nil
 }
```