Commit 07c7eba

ci: apply automated fixes
1 parent f1034e0 commit 07c7eba

1 file changed

Lines changed: 32 additions & 32 deletions

File tree

src/blog/tanstack-ai-code-mode.md

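Judging by the nature of the changes below — double quotes converted to single quotes, trailing semicolons dropped — the "automated fixes" look like a formatter pass. A Prettier config that would produce exactly this style (an assumption; the commit itself does not include the config) would be:

```json
{
  "singleQuote": true,
  "semi": false
}
```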
--- a/src/blog/tanstack-ai-code-mode.md
+++ b/src/blog/tanstack-ai-code-mode.md
@@ -1,5 +1,5 @@
 ---
-title: "Code Mode: Let Your AI Write Programs, Not Just Call Tools"
+title: 'Code Mode: Let Your AI Write Programs, Not Just Call Tools'
 published: 2026-04-08
 excerpt: One tool call at a time is the bottleneck. TanStack AI Code Mode lets the LLM write and execute TypeScript programs in secure sandboxes, composing your tools with loops, conditionals, and Promise.all in a single shot.
 authors:
@@ -51,21 +51,21 @@ Without Code Mode, the LLM calls `getTopProducts`, waits for the result, then ca
 With Code Mode, the LLM writes this:

 ```typescript
-const top = await external_getTopProducts({ limit: 5 });
+const top = await external_getTopProducts({ limit: 5 })

 const ratings = await Promise.all(
   top.products.map((p) => external_getProductRatings({ productId: p.id })),
-);
+)

 return top.products.map((product, i) => {
-  const scores = ratings[i].ratings.map((r) => r.score);
-  const avg = scores.reduce((sum, s) => sum + s, 0) / scores.length;
+  const scores = ratings[i].ratings.map((r) => r.score)
+  const avg = scores.reduce((sum, s) => sum + s, 0) / scores.length
   return {
     name: product.name,
     sales: product.totalSales,
     averageRating: Math.round(avg * 100) / 100,
-  };
-});
+  }
+})
 ```

 One tool call. Five API fetches in parallel. Math computed in JavaScript, not in the model. The averages are correct to the penny. The context window savings compound fast: every round-trip you eliminate is hundreds of tokens you don't spend.
@@ -100,43 +100,43 @@ pnpm add @tanstack/ai-isolate-cloudflare
100100
Same `toolDefinition()` API you already use. Nothing changes here:
101101

102102
```typescript
103-
import { toolDefinition } from "@tanstack/ai";
104-
import { z } from "zod";
103+
import { toolDefinition } from '@tanstack/ai'
104+
import { z } from 'zod'
105105

106106
const fetchWeather = toolDefinition({
107-
name: "fetchWeather",
108-
description: "Get current weather for a city",
107+
name: 'fetchWeather',
108+
description: 'Get current weather for a city',
109109
inputSchema: z.object({ location: z.string() }),
110110
outputSchema: z.object({
111111
temperature: z.number(),
112112
condition: z.string(),
113113
}),
114114
}).server(async ({ location }) => {
115-
const res = await fetch(`https://api.weather.example/v1?city=${location}`);
116-
return res.json();
117-
});
115+
const res = await fetch(`https://api.weather.example/v1?city=${location}`)
116+
return res.json()
117+
})
118118
```
119119

120120
### Create Code Mode and use it with `chat()`
121121

122122
```typescript
123-
import { chat } from "@tanstack/ai";
124-
import { openaiText } from "@tanstack/ai-openai";
125-
import { createCodeMode } from "@tanstack/ai-code-mode";
126-
import { createNodeIsolateDriver } from "@tanstack/ai-isolate-node";
123+
import { chat } from '@tanstack/ai'
124+
import { openaiText } from '@tanstack/ai-openai'
125+
import { createCodeMode } from '@tanstack/ai-code-mode'
126+
import { createNodeIsolateDriver } from '@tanstack/ai-isolate-node'
127127

128128
const { tool, systemPrompt } = createCodeMode({
129129
driver: createNodeIsolateDriver(),
130130
tools: [fetchWeather],
131131
timeout: 30_000,
132-
});
132+
})
133133

134134
const result = await chat({
135-
adapter: openaiText("gpt-4o"),
136-
systemPrompts: ["You are a helpful assistant.", systemPrompt],
135+
adapter: openaiText('gpt-4o'),
136+
systemPrompts: ['You are a helpful assistant.', systemPrompt],
137137
tools: [tool],
138138
messages,
139-
});
139+
})
140140
```
141141

142142
`createCodeMode` returns two things: the `execute_typescript` tool and a system prompt containing typed function stubs for every tool you passed in. The model sees exact input/output types, so it generates correct calls without guessing parameter shapes. TypeScript annotations are stripped automatically before execution.
@@ -176,30 +176,30 @@ Right now the model rewrites the same logic every time. If it figures out a good
 **High-level**: `codeModeWithSkills()` handles everything. Skill selection via a cheap LLM call, tool registry assembly, system prompt generation.

 ```typescript
-import { codeModeWithSkills } from "@tanstack/ai-code-mode-skills";
-import { createFileSkillStorage } from "@tanstack/ai-code-mode-skills/storage";
-import { createNodeIsolateDriver } from "@tanstack/ai-isolate-node";
-import { openaiText } from "@tanstack/ai-openai";
+import { codeModeWithSkills } from '@tanstack/ai-code-mode-skills'
+import { createFileSkillStorage } from '@tanstack/ai-code-mode-skills/storage'
+import { createNodeIsolateDriver } from '@tanstack/ai-isolate-node'
+import { openaiText } from '@tanstack/ai-openai'

-const storage = createFileSkillStorage({ directory: "./.skills" });
+const storage = createFileSkillStorage({ directory: './.skills' })

 const { toolsRegistry, systemPrompt } = await codeModeWithSkills({
   config: {
     driver: createNodeIsolateDriver(),
     tools: [myTool1, myTool2],
     timeout: 60_000,
   },
-  adapter: openaiText("gpt-4o-mini"), // cheap model for skill selection
+  adapter: openaiText('gpt-4o-mini'), // cheap model for skill selection
   skills: { storage, maxSkillsInContext: 5 },
   messages,
-});
+})

 const stream = chat({
-  adapter: openaiText("gpt-4o"), // strong model for reasoning
+  adapter: openaiText('gpt-4o'), // strong model for reasoning
   toolRegistry: toolsRegistry,
   messages,
-  systemPrompts: ["You are a helpful assistant.", systemPrompt],
-});
+  systemPrompts: ['You are a helpful assistant.', systemPrompt],
+})
 ```

 **Manual**: use `createCodeMode`, `skillsToTools`, and `createSkillManagementTools` individually when you want full control over which skills load and how they're assembled.
