diff --git a/.agents/skills/constructive-chatbot/SKILL.md b/.agents/skills/constructive-chatbot/SKILL.md
new file mode 100644
index 0000000..b91d739
--- /dev/null
+++ b/.agents/skills/constructive-chatbot/SKILL.md
@@ -0,0 +1,318 @@
+---
+name: constructive-chatbot
+description: "AI chatbot widget from the @constructive registry — install via shadcn, set up the API route, configure providers, add data-chat-* attributes for page context scraping, define server/client tools, and customize tool UI. Use when adding a chatbot to a Next.js app, building tool-calling flows, wiring up embeddings, or configuring page-aware AI chat."
+compatibility: React 19, Next.js 15+, Vercel AI SDK, Tailwind CSS v4
+metadata:
+ author: constructive-io
+ version: "1.0.0"
+---
+
+# Constructive Chatbot
+
+AI chat widget distributed via the @constructive shadcn registry. Includes page context scraping, tool calling (server + client), approval flows, and configurable LLM/embeddings providers.
+
+## When to Apply
+
+Use this skill when:
+- Installing the chatbot widget in a Next.js app
+- Setting up the chat API route and LLM provider
+- Adding `data-chat-*` attributes to expose page context to the AI
+- Defining tools (server-side or client-side) for the chatbot to call
+- Customizing tool UI (labels, icons, approval badges)
+- Configuring embeddings for RAG-powered chat
+- Debugging tool lifecycle or approval flows
+
+## Install
+
+### 1. Registry config
+
+Add the `@constructive` registry to `components.json`:
+
+```json
+{
+ "registries": {
+ "@constructive": "https://constructive-io.github.io/dashboard/r/{name}.json"
+ }
+}
+```
+
+### 2. Install the block
+
+```bash
+npx shadcn@latest add @constructive/chat
+```
+
+This installs:
+- Components into `src/components/chat/`
+- API route at `src/app/api/chat/route.ts`
+- Test route at `src/app/api/chat/test/route.ts`
+
+### 3. CSS setup
+
+In `globals.css`:
+
+```css
+@plugin "@tailwindcss/typography";
+@import "@constructive-io/ui/globals.css";
+```
+
+### 4. Layout setup
+
+In your root `layout.tsx`:
+
+```tsx
+import { ChatProvider, ChatWidget } from '@/components/chat';
+import { PortalRoot } from '@/components/ui/portal';
+
+export default function RootLayout({ children }: { children: React.ReactNode }) {
+  return (
+    <html lang="en">
+      <body>
+        <ChatProvider>
+          {children}
+          <ChatWidget />
+        </ChatProvider>
+        <PortalRoot />
+      </body>
+    </html>
+  );
+}
+```
+
+`ChatProvider` accepts an optional `config` prop — all fields have sensible defaults:
+
+```tsx
+<ChatProvider
+  config={{
+    title: 'AI Chat',
+    subtitle: 'Ask anything about this page.',
+    suggestions: ['What is on this page?'],
+    scrape: true,
+    api: '/api/chat',
+  }}
+>
+  {children}
+</ChatProvider>
+```
+
+## API Route
+
+The installed route at `src/app/api/chat/route.ts` handles:
+- LLM provider creation (Anthropic or OpenAI-compatible)
+- System prompt with scraped page context
+- Tool registration from the tool registry
+- Streaming response via Vercel AI SDK
+
+The route receives `providerConfig` and `embeddingsConfig` from the client (stored in the Settings panel). No server-side env vars required — the user configures everything in the UI.
+
+See [configuration.md](./references/configuration.md) for provider presets and API route details.
+
+## Page Context Scraping
+
+The chatbot automatically scrapes DOM elements with `data-chat-*` attributes and sends them as context to the LLM.
+
+### Adding context to your page
+
+Mark elements with `data-chat-component` and optional `data-chat-*` attributes:
+
+```html
+<section
+  data-chat-component="pricing-table"
+  data-chat-plan="Pro"
+  data-chat-price="$29/mo"
+  data-chat-features="unlimited-projects,api-access,priority-support"
+>
+  ...
+</section>
+
+<table
+  data-chat-component="user-table"
+  data-chat-total-rows="142"
+  data-chat-sort="created_at:desc"
+  data-chat-filters="role:admin,status:active"
+>
+  ...
+</table>
+```
+
+### How scraping works
+
+1. On each message send, the scraper queries all `[data-chat-component]` elements
+2. For each visible element, it collects the component name and all `data-chat-*` attributes
+3. The scraped nodes are sent as `context` in the API request body
+4. The API route injects them into the system prompt:
+ ```
+ Page context:
+ - pricing-table: {"plan":"Pro","price":"$29/mo","features":"unlimited-projects,api-access,priority-support"}
+ - user-table: {"total-rows":"142","sort":"created_at:desc","filters":"role:admin,status:active"}
+ ```
+
+**Rules:**
+- Only visible elements are scraped (checked via `offsetParent`)
+- Max 50 nodes per scrape
+- The `data-chat-` prefix is stripped from attribute keys
+- Set `scrape: false` in config to disable
+
+See [scraping.md](./references/scraping.md) for advanced patterns and best practices.
+
+## Tools
+
+The chatbot supports tool calling via a registry pattern. Tools can run on the server (API route) or client (browser), with optional user approval.
+
+### Defining tools
+
+Edit `src/components/chat/tool-registry.ts`:
+
+```ts
+import { z } from 'zod';
+// (assignments below go in tool-registry.ts itself, where toolRegistry and
+// the ToolEntry type are already defined — no self-import needed)
+
+// Server tool — executes in the API route
+toolRegistry.search_docs = {
+ description: 'Search the knowledge base for relevant documents',
+ inputSchema: z.object({ query: z.string() }),
+ type: 'server',
+ needsApproval: false,
+  execute: async ({ query }) => {
+    // searchVectorStore is your own lookup helper (not included) —
+    // wire it to whatever vector store you use
+    const results = await searchVectorStore(query);
+    return JSON.stringify({ results });
+  },
+};
+
+// Client tool — executes in the browser, requires approval
+toolRegistry.send_email = {
+ description: 'Send an email to a recipient',
+ inputSchema: z.object({
+ to: z.string().email(),
+ subject: z.string(),
+ body: z.string(),
+ }),
+ type: 'client',
+ needsApproval: true,
+ execute: async ({ to, subject, body }) => {
+ await fetch('/api/email', {
+ method: 'POST',
+ body: JSON.stringify({ to, subject, body }),
+ });
+ return `Email sent to ${to}`;
+ },
+};
+```
+
+### Server vs Client tools
+
+| | Server | Client |
+|---|---|---|
+| **Where** | API route (Node.js) | Browser |
+| **Use for** | DB queries, API calls, embeddings search | UI actions, user-facing side effects |
+| **`needsApproval`** | Usually `false` | Usually `true` |
+| **`execute`** | Runs in `route.ts` | Runs in `ToolMessage` component |
+
+### Tool lifecycle
+
+```
+input-streaming → input-available → [approval-requested → approval-responded] → output-available
+                                                                              → output-denied
+                                                                              → output-error
+```
+
+Tools with `needsApproval: true` show an approval card. After approval, client tools execute in the browser; server tools execute in the API route.
+
+See [tool-system.md](./references/tool-system.md) for lifecycle details, approval flow, auto-continuation, and UI customization.
+
+### Customizing tool UI
+
+Edit `src/components/chat/tool-ui-config.ts` to override labels, icons, and approval badges per tool:
+
+```ts
+import { Mail, Search } from 'lucide-react';
+
+const toolUIRegistry = {
+ search_docs: {
+ labels: { streaming: 'Searching...', done: 'Found results' },
+ icon: Search,
+ },
+ send_email: {
+ labels: { streaming: 'Preparing email...', executing: 'Sending...', done: 'Email sent', error: 'Send failed' },
+ icon: Mail,
+ approval: {
+ badge: { label: 'Send Email', icon: Mail, variant: 'create' },
+ },
+ },
+};
+```
+
+## Embeddings Setup
+
+The chatbot includes embeddings provider configuration for RAG workflows. The Settings panel has a dedicated "Embeddings Model" section.
+
+### How to wire embeddings into tools
+
+1. User configures embeddings provider in Settings (OpenAI or OpenAI-compatible)
+2. The `embeddingsConfig` is sent with every API request
+3. In your API route or server tool, use the config to generate/query embeddings:
+
+```ts
+// In a server tool's execute function or custom API route logic:
+toolRegistry.search_docs = {
+ description: 'Search documents using semantic similarity',
+ inputSchema: z.object({ query: z.string() }),
+ type: 'server',
+ needsApproval: false,
+ execute: async ({ query }) => {
+ // Access embeddingsConfig from the request body
+ // (extend the API route to pass it through)
+ const embedding = await generateEmbedding(query, embeddingsConfig);
+ const results = await vectorStore.similaritySearch(embedding);
+ return JSON.stringify({ results });
+ },
+};
+```
+
+### Supported providers
+
+| Provider | Preset model | Dimensions |
+|----------|-------------|------------|
+| OpenAI | `text-embedding-3-small` | 1536 |
+| OpenAI Compatible (Ollama, etc.) | `nomic-embed-text` | 768 |
+
+See [configuration.md](./references/configuration.md) for the full provider presets and settings shape.
+
+## Architecture
+
+```
+ChatProvider (context + state)
+├── ChatWidget (positioning shell)
+│ ├── ChatPanel (main view)
+│ │ ├── ChatMessages (message list + tool rendering)
+│ │ │ ├── ChatMessageContent (markdown rendering)
+│ │ │ └── ToolMessage (tool lifecycle + approval)
+│ │ │ ├── ToolStatus (spinner/check/error + shimmer)
+│ │ │ └── ToolApprovalCard (approve/reject UI)
+│ │ ├── ChatInput (message input)
+│ │ └── ChatSettings (provider config panel)
+│ └── ChatFab (floating trigger button)
+```
+
+### Key files after install
+
+| File | Purpose |
+|------|---------|
+| `src/components/chat/index.ts` | Public exports |
+| `src/components/chat/chat.types.ts` | Types, defaults, provider presets |
+| `src/components/chat/chat-context.tsx` | React context, state, Vercel AI SDK integration |
+| `src/components/chat/tool-registry.ts` | Tool definitions (consumer edits this) |
+| `src/components/chat/tool-ui-config.ts` | Per-tool UI overrides (consumer edits this) |
+| `src/components/chat/tool-message.tsx` | Tool rendering, approval cards, client execution |
+| `src/components/chat/dom-scraper.ts` | Page context scraping logic |
+| `src/app/api/chat/route.ts` | API route (LLM + tools) |
+| `src/app/api/chat/test/route.ts` | Connection test endpoint |
+
+## Reference Guide
+
+- [tool-system.md](./references/tool-system.md) — Tool registry, lifecycle states, approval flow, auto-continuation, client execution
+- [configuration.md](./references/configuration.md) — Provider presets, ChatConfig, API route internals, embeddings config
+- [scraping.md](./references/scraping.md) — data-chat-* attributes, scraper behavior, context patterns
diff --git a/.agents/skills/constructive-chatbot/references/configuration.md b/.agents/skills/constructive-chatbot/references/configuration.md
new file mode 100644
index 0000000..3b148c7
--- /dev/null
+++ b/.agents/skills/constructive-chatbot/references/configuration.md
@@ -0,0 +1,218 @@
+# Configuration
+
+## ChatConfig
+
+Passed to `<ChatProvider config={...}>`. All fields are optional with sensible defaults.
+
+```ts
+interface ChatConfig {
+ api?: string; // API endpoint (default: "/api/chat")
+ scrape?: boolean; // Enable DOM scraping (default: true)
+ title?: string; // Chat header title (default: "AI Chat")
+ subtitle?: string; // Empty-state subtitle (default: "Ask anything about this page.")
+ suggestions?: string[]; // Quick-start prompts
+ storageKey?: string; // localStorage key (default: "chat-widget-settings")
+}
+```
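
A config object matching this interface might look like the following. This is a sketch with illustrative values only; the interface is restated so the snippet is self-contained.

```typescript
// ChatConfig restated from the skill docs so this snippet stands alone.
interface ChatConfig {
  api?: string;
  scrape?: boolean;
  title?: string;
  subtitle?: string;
  suggestions?: string[];
  storageKey?: string;
}

// Illustrative values — every field is optional and falls back to a default.
const config: ChatConfig = {
  title: 'Docs Assistant',
  suggestions: ['Summarize this page', 'What plans are available?'],
  scrape: true,
};

console.log(config.title); // Docs Assistant
```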
+
+## LLM Provider Settings
+
+Configured by the user via the Settings panel in the chat UI. Stored in localStorage.
+
+```ts
+interface LLMSettings {
+ provider: 'anthropic' | 'openai-compat';
+ apiKey: string;
+ baseUrl: string;
+ model: string;
+}
+```
+
+### Provider presets
+
+| Provider | Label | Default Base URL | Default Model |
+|----------|-------|-----------------|---------------|
+| `anthropic` | Anthropic | `https://api.anthropic.com` | `claude-sonnet-4-20250514` |
+| `openai-compat` | OpenAI Compatible | `http://localhost:11434/v1` | `gpt-4o` |
+
+The "OpenAI Compatible" provider works with OpenAI, Ollama, vLLM, or any API that implements the OpenAI chat completions format.
+
+### How settings flow
+
+1. User configures in Settings panel → saved to localStorage
+2. On each message send, `ChatProvider` reads from `settingsRef.current`
+3. Sent as `providerConfig` in the API request body
+4. API route calls `createModel(providerConfig)` to instantiate the provider
+
+## Embeddings Provider Settings
+
+Configured in the Settings panel under "Embeddings Model". Stored separately in localStorage (`${storageKey}-embeddings`).
+
+```ts
+interface EmbeddingsSettings {
+ provider: 'openai' | 'openai-compat';
+ apiKey: string;
+ baseUrl: string;
+ model: string;
+ dimensions: number;
+}
+```
+
+### Embeddings presets
+
+| Provider | Label | Default Base URL | Default Model | Dimensions |
+|----------|-------|-----------------|---------------|------------|
+| `openai` | OpenAI | `https://api.openai.com/v1` | `text-embedding-3-small` | 1536 |
+| `openai-compat` | OpenAI Compatible | `http://localhost:11434/v1` | `nomic-embed-text` | 768 |
+
+### Using embeddings in tools
+
+The `embeddingsConfig` is sent with every API request alongside `providerConfig`. To use it in a server tool:
+
+1. Extend the API route to extract `embeddingsConfig` from the request body (it's already sent)
+2. Pass it to your tool's execute function or use it in a custom route handler
+
+```ts
+// In your API route, embeddingsConfig is available:
+const embeddingsConfig = body.embeddingsConfig;
+
+// Use it to generate embeddings for RAG:
+import { embed } from 'ai';
+import { createOpenAI } from '@ai-sdk/openai';
+
+const embeddingModel = createOpenAI({
+ apiKey: embeddingsConfig.apiKey,
+ baseURL: embeddingsConfig.baseUrl,
+}).embedding(embeddingsConfig.model);
+
+const { embedding } = await embed({
+ model: embeddingModel,
+ value: query,
+});
+```
+
+### RAG tool example
+
+A complete server tool that uses embeddings for document search:
+
+```ts
+toolRegistry.search_docs = {
+ description: 'Search knowledge base using semantic similarity',
+ inputSchema: z.object({ query: z.string() }),
+ type: 'server',
+ needsApproval: false,
+ execute: async ({ query }) => {
+ // 1. Generate embedding for the query
+ const embedding = await generateEmbedding(query);
+ // 2. Query vector store (pgvector, Pinecone, etc.)
+ const results = await vectorStore.similaritySearch(embedding, { limit: 5 });
+ // 3. Return results for the LLM to use
+ return JSON.stringify({ results: results.map(r => ({ title: r.title, content: r.content })) });
+ },
+};
+```
+
+## API Route
+
+### Request body
+
+The API route receives:
+
+```ts
+{
+ messages: UIMessage[]; // Chat history
+ providerConfig: { // LLM settings from UI
+ provider: 'anthropic' | 'openai-compat';
+ apiKey: string;
+ baseUrl: string;
+ model: string;
+ };
+ embeddingsConfig: { // Embeddings settings from UI
+ provider: 'openai' | 'openai-compat';
+ apiKey: string;
+ baseUrl: string;
+ model: string;
+ dimensions: number;
+ };
+ context: ScrapedNode[]; // Scraped page context
+}
+```
+
+### Model creation
+
+```ts
+function createModel(config: ProviderConfig) {
+ if (config.provider === 'anthropic') {
+ const anthropic = createAnthropic({ apiKey: config.apiKey });
+ return anthropic(config.model || 'claude-sonnet-4-20250514');
+ }
+ // OpenAI-compatible
+ const provider = createOpenAICompatible({
+ name: 'openai-compat',
+ baseURL: config.baseUrl.replace(/\/$/, ''),
+ apiKey: config.apiKey || undefined,
+ });
+ return provider.chatModel(config.model || 'gpt-4o');
+}
+```
+
+### System prompt
+
+The system prompt includes scraped page context:
+
+```ts
+function buildSystemPrompt(context: ScrapedNode[]) {
+ let prompt = 'You are a helpful AI assistant embedded in a web page. Be concise and helpful.';
+ if (context.length > 0) {
+ prompt += '\n\nPage context:\n';
+ prompt += context.map(n => `- ${n.component}: ${JSON.stringify(n.attributes)}`).join('\n');
+ }
+ return prompt;
+}
+```
+
+### streamText configuration
+
+```ts
+const result = streamText({
+ model,
+ system: buildSystemPrompt(context),
+ messages,
+ tools: buildTools(),
+ maxOutputTokens: 4096,
+ stopWhen: stepCountIs(2), // Max 2 tool-use rounds
+ temperature: 0.7,
+});
+```
+
+`stopWhen: stepCountIs(2)` prevents infinite tool-calling loops — the LLM gets at most 2 rounds of tool use before it must respond with text.
+
+### Test route
+
+The test route at `src/app/api/chat/test/route.ts` validates the provider config by making a minimal LLM call. Used by the "Test Connection" button in Settings.
+
+## Settings Persistence
+
+- LLM settings: `localStorage.getItem('chat-widget-settings')`
+- Embeddings settings: `localStorage.getItem('chat-widget-settings-embeddings')`
+- Custom key: set `storageKey` in `ChatConfig` to change the prefix
+
+Settings are loaded on mount and saved on every change. The `settingsRef` pattern ensures the transport always uses the latest settings without recreating the transport object.
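
The idea behind that ref pattern can be simulated in a few lines (this is not the shipped implementation — just a sketch of why reading settings through a ref avoids recreating the transport):

```typescript
// Hypothetical sketch: the transport closes over a getter that reads a
// mutable ref, so updating settingsRef.current needs no transport recreation.
type LLMSettings = { provider: string; model: string };

function createTransport(getSettings: () => LLMSettings) {
  return {
    send(message: string): string {
      const s = getSettings(); // read at send time, not at creation time
      return `[${s.provider}/${s.model}] ${message}`;
    },
  };
}

const settingsRef = { current: { provider: 'anthropic', model: 'claude-sonnet-4-20250514' } };
const transport = createTransport(() => settingsRef.current);

console.log(transport.send('hi')); // [anthropic/claude-sonnet-4-20250514] hi
settingsRef.current = { provider: 'openai-compat', model: 'llama3.2' };
console.log(transport.send('hi')); // [openai-compat/llama3.2] hi — same transport object
```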
+
+## Local Development with Ollama
+
+For local development without API keys:
+
+1. Install [Ollama](https://ollama.ai)
+2. Pull a model: `ollama pull llama3.2`
+3. Pull an embeddings model: `ollama pull nomic-embed-text`
+4. In chat Settings:
+ - Provider: "OpenAI Compatible"
+ - Base URL: `http://localhost:11434/v1`
+ - Model: `llama3.2`
+ - API Key: (leave empty)
+5. For embeddings:
+ - Provider: "OpenAI Compatible"
+ - Base URL: `http://localhost:11434/v1`
+ - Model: `nomic-embed-text`
+ - Dimensions: `768`
diff --git a/.agents/skills/constructive-chatbot/references/scraping.md b/.agents/skills/constructive-chatbot/references/scraping.md
new file mode 100644
index 0000000..f913b18
--- /dev/null
+++ b/.agents/skills/constructive-chatbot/references/scraping.md
@@ -0,0 +1,197 @@
+# Page Context Scraping
+
+The chatbot scrapes DOM elements marked with `data-chat-*` attributes to give the LLM awareness of what's on the page.
+
+## How It Works
+
+1. User sends a message
+2. `ChatProvider` calls `scrapePageContext()` before sending the API request
+3. The scraper queries all elements with `[data-chat-component]`
+4. For each visible element, it collects the component name and all `data-chat-*` attributes
+5. The results are sent as `context` in the request body
+6. The API route injects them into the system prompt
+
+## The `data-chat-*` Attribute Convention
+
+### Required: `data-chat-component`
+
+Every element you want the chatbot to see must have `data-chat-component`:
+
+```html
+<section data-chat-component="pricing-table">
+  ...
+</section>
+```
+
+This identifies the element type. The value should be a descriptive name the LLM can understand.
+
+### Optional: `data-chat-*` attributes
+
+Add any `data-chat-*` attribute to provide structured data:
+
+```html
+<div
+  data-chat-component="order-summary"
+  data-chat-total="$142.50"
+  data-chat-items="3"
+  data-chat-status="pending"
+>
+  ...
+</div>
+```
+
+The `data-chat-` prefix is stripped, so the LLM sees:
+
+```json
+{"total": "$142.50", "items": "3", "status": "pending"}
+```
+
+## Scraper Rules
+
+- **Visibility**: only elements with a truthy `offsetParent` are scraped (hidden elements are skipped). Exception: `position: fixed` elements are always included.
+- **Max nodes**: capped at 50 elements per scrape to keep the context window manageable.
+- **Prefix stripping**: `data-chat-` is removed from attribute keys. `data-chat-component` is used as the node name, not included in attributes.
+- **Values are strings**: all attribute values are strings. The LLM interprets them in context.
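
The prefix-stripping rule can be sketched as a pure helper (names here are hypothetical — the real logic lives in `dom-scraper.ts`):

```typescript
interface ScrapedNode {
  component: string;
  attributes: Record<string, string>;
}

// Sketch of the attribute-collection step: keep only data-chat-* attributes,
// strip the prefix, and treat data-chat-component as the node name.
function toScrapedNode(attrs: Record<string, string>): ScrapedNode | null {
  const component = attrs['data-chat-component'];
  if (!component) return null; // only marked elements are scraped
  const attributes: Record<string, string> = {};
  for (const [key, value] of Object.entries(attrs)) {
    if (key.startsWith('data-chat-') && key !== 'data-chat-component') {
      attributes[key.slice('data-chat-'.length)] = value;
    }
  }
  return { component, attributes };
}

const node = toScrapedNode({
  'data-chat-component': 'order-summary',
  'data-chat-total': '$142.50',
  'data-testid': 'cart', // non-chat data attributes are ignored
});
console.log(node); // { component: 'order-summary', attributes: { total: '$142.50' } }
```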
+
+## Patterns
+
+### Static page content
+
+Annotate key sections so the chatbot can answer "what's on this page":
+
+```html
+<header data-chat-component="page-header" data-chat-title="Pricing Plans">
+  <h1>Pricing Plans</h1>
+</header>
+
+<div
+  data-chat-component="plan"
+  data-chat-name="Starter"
+  data-chat-price="Free"
+  data-chat-limits="1 project, 100 API calls/day"
+>
+  ...
+</div>
+
+<div
+  data-chat-component="plan"
+  data-chat-name="Pro"
+  data-chat-price="$29/mo"
+  data-chat-limits="Unlimited projects, 10k API calls/day"
+>
+  ...
+</div>
+```
+
+The LLM receives:
+
+```
+Page context:
+- page-header: {"title":"Pricing Plans"}
+- plan: {"name":"Starter","price":"Free","limits":"1 project, 100 API calls/day"}
+- plan: {"name":"Pro","price":"$29/mo","limits":"Unlimited projects, 10k API calls/day"}
+```
+
+### Dynamic data views
+
+Annotate data grids and tables with summary metadata:
+
+```tsx
+{/* illustrative bindings — wire to your own table state */}
+<div
+  data-chat-component="user-table"
+  data-chat-total-rows={String(users.length)}
+  data-chat-sort={`${sortField}:${sortDirection}`}
+  data-chat-filters={activeFilters.join(',')}
+>
+  <UserTable users={users} />
+</div>
+```
+
+### Navigation state
+
+Let the chatbot know where the user is:
+
+```tsx
+{/* pathname from usePathname() or similar */}
+<nav data-chat-component="main-nav" data-chat-current-page={pathname}>
+  ...
+</nav>
+```
+
+### Forms
+
+Annotate forms so the chatbot can help with them:
+
+```tsx
+{/* illustrative — expose whatever form state helps the assistant */}
+<form
+  data-chat-component="checkout-form"
+  data-chat-step="payment"
+  data-chat-errors={errors.join(',')}
+>
+  ...
+</form>
+```
+
+### Conditional context
+
+Only add attributes when relevant:
+
+```tsx
+<div
+  data-chat-component="sync-status"
+  {...(syncError ? { 'data-chat-error': syncError.message } : {})}
+>
+  ...
+</div>
+```
+
+## Best Practices
+
+1. **Be descriptive with component names** — use `user-profile-card` not `card1`
+2. **Include actionable data** — the LLM can reference specific values in its responses
+3. **Keep values concise** — long paragraphs of text as attribute values waste context
+4. **Use commas for lists** — `data-chat-tags="react,typescript,next"` is cleaner than JSON arrays
+5. **Stringify objects sparingly** — flat key-value attributes are easier for the LLM to parse
+6. **Don't over-annotate** — mark the 5-10 most important elements, not every div. The 50-node cap exists for a reason.
+7. **Use dynamic values** — bind React state to attributes so context updates automatically:
+
+```tsx
+<div data-chat-component="cart-badge" data-chat-item-count={String(items.length)} />
+```
+
+## Disabling Scraping
+
+Pass `scrape: false` to `ChatProvider`:
+
+```tsx
+<ChatProvider config={{ scrape: false }}>
+```
+
+When disabled, no `context` is sent in the API request, and the system prompt omits the "Page context" section.
+
+## How Context Reaches the LLM
+
+```
+DOM elements with data-chat-*
+ |
+ scrapePageContext() → ScrapedNode[]
+ |
+ ChatProvider transport → { context: ScrapedNode[], ... }
+ |
+ API route POST body → body.context
+ |
+ buildSystemPrompt(context) → system prompt string
+ |
+ streamText({ system: ... }) → LLM sees page context
+```
+
+The `ScrapedNode` type:
+
+```ts
+interface ScrapedNode {
+ component: string; // from data-chat-component
+  attributes: Record<string, string>; // from other data-chat-* attrs
+}
+```
diff --git a/.agents/skills/constructive-chatbot/references/tool-system.md b/.agents/skills/constructive-chatbot/references/tool-system.md
new file mode 100644
index 0000000..2f3ecdf
--- /dev/null
+++ b/.agents/skills/constructive-chatbot/references/tool-system.md
@@ -0,0 +1,278 @@
+# Tool System
+
+The chatbot's tool system lets the LLM call functions during a conversation. Tools are defined in a shared registry and can execute on the server (API route) or client (browser).
+
+## Tool Registry
+
+All tools live in `src/components/chat/tool-registry.ts`:
+
+```ts
+import { z } from 'zod';
+
+export interface ToolEntry<TInput = unknown> {
+  description: string;
+  inputSchema: z.ZodType<TInput>;
+  type: 'server' | 'client';
+  needsApproval: boolean;
+  execute: (input: TInput) => Promise<string>;
+}
+
+export const toolRegistry: Record<string, ToolEntry<any>> = {};
+```
+
+Add tools by assigning to `toolRegistry`:
+
+```ts
+toolRegistry.get_weather = {
+ description: 'Get current weather for a city',
+ inputSchema: z.object({ city: z.string() }),
+ type: 'server',
+ needsApproval: false,
+ execute: async ({ city }) =>
+ JSON.stringify({ city, temp: '22C', condition: 'Sunny' }),
+};
+```
+
+The registry is imported by both the API route (for server tools) and the client (for client tool execution and UI rendering).
+
+## Server vs Client Tools
+
+### Server tools (`type: 'server'`)
+
+- `execute` runs inside the API route's `streamText` call
+- The LLM receives the result and can use it in its response
+- Ideal for: database queries, vector search, external API calls
+- Usually set `needsApproval: false` (transparent to the user)
+
+```ts
+toolRegistry.search_products = {
+ description: 'Search product catalog',
+ inputSchema: z.object({ query: z.string(), limit: z.number().optional() }),
+ type: 'server',
+ needsApproval: false,
+ execute: async ({ query, limit = 5 }) => {
+ const results = await db.products.search(query, limit);
+ return JSON.stringify(results);
+ },
+};
+```
+
+### Client tools (`type: 'client'`)
+
+- `execute` runs in the browser after user approval
+- The API route registers them without an `execute` function (input only)
+- Ideal for: sending emails, creating records, triggering UI actions
+- Usually set `needsApproval: true`
+
+```ts
+toolRegistry.create_ticket = {
+ description: 'Create a support ticket',
+ inputSchema: z.object({
+ title: z.string(),
+ description: z.string(),
+ priority: z.enum(['low', 'medium', 'high']),
+ }),
+ type: 'client',
+ needsApproval: true,
+ execute: async (input) => {
+ const res = await fetch('/api/tickets', {
+ method: 'POST',
+ headers: { 'Content-Type': 'application/json' },
+ body: JSON.stringify(input),
+ });
+ const ticket = await res.json();
+ return JSON.stringify({ success: true, ticketId: ticket.id });
+ },
+};
+```
+
+## How Tools Are Registered in the API Route
+
+The API route's `buildTools()` function reads the registry and creates Vercel AI SDK tool definitions:
+
+```ts
+function buildTools() {
+ const entries = Object.entries(toolRegistry);
+ if (entries.length === 0) return undefined;
+
+ return Object.fromEntries(
+ entries.map(([name, entry]) => [
+ name,
+ entry.type === 'server'
+ ? tool({
+ description: entry.description,
+ inputSchema: entry.inputSchema,
+ needsApproval: entry.needsApproval || undefined,
+ execute: async (input) => entry.execute(input),
+ })
+ : tool({
+ description: entry.description,
+ inputSchema: entry.inputSchema,
+ needsApproval: entry.needsApproval,
+ // No execute — client handles it
+ }),
+ ]),
+ );
+}
+```
+
+## Tool Lifecycle States
+
+Each tool invocation progresses through states:
+
+```
+input-streaming LLM is generating the tool's input arguments
+ |
+input-available Input is fully received
+ |
+ +-- needsApproval: false --> execute immediately
+ |
+ +-- needsApproval: true
+ |
+ approval-requested Approval card shown to user
+ |
+ +-- Approve --> approval-responded (approved: true)
+ | |
+ | +-- server tool: API route executes
+ | +-- client tool: browser executes via useEffect
+ | |
+ | output-available Success
+ | output-error Execute threw an error
+ |
+ +-- Reject --> output-denied (via addToolOutput with {rejected: true})
+```
+
+## Approval Flow
+
+When a tool has `needsApproval: true`:
+
+1. **Approval card appears** — shows the tool name, badge, and input preview
+2. **User clicks Approve or Reject**
+ - **Approve**: calls `addToolApprovalResponse({ id, approved: true })`
+ - **Reject**: calls `addToolOutput(toolCallId, JSON.stringify({ rejected: true }))`
+3. **Client tool execution**: a `useEffect` in `ToolMessage` watches for `approval-responded` state with `approved: true`, then calls `entry.execute(input)`
+4. **Result reporting**: the execute result is sent via `addToolOutput(toolCallId, result)`
+
+### One-shot guard
+
+The `executedRef` in `ToolMessage` prevents double-execution:
+
+```ts
+const executedRef = useRef(false);
+useEffect(() => {
+ if (state !== 'approval-responded' || !approval?.approved || executedRef.current) return;
+ executedRef.current = true;
+ // ... execute
+}, [state, approval?.approved]);
+```
+
+## Auto-Continuation
+
+After a tool completes (approved or auto-executed), the chat needs to send the result back to the LLM so it can continue responding. This is handled by `sendAutomaticallyWhen` in `chat-context.tsx`:
+
+```ts
+sendAutomaticallyWhen: ({ messages }) => {
+ // Check if all approval-required tools are resolved
+ // Use one-shot guard (autoSentRef) to prevent infinite loops
+ // Skip if any tool was rejected
+ // Return true to auto-send once all tools are done
+}
+```
+
+Key behaviors:
+- Waits until ALL tool parts in the last assistant message are resolved
+- Fires only once per approval cycle (one-shot `autoSentRef`)
+- Resets the guard when new approval parts appear
+- Skips auto-send if any tool was rejected (the rejection message is the final response)
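
The "all resolved" check at the heart of that predicate might look like this — a sketch of the described behavior, not the shipped code:

```typescript
type ToolState =
  | 'input-streaming'
  | 'input-available'
  | 'approval-requested'
  | 'approval-responded'
  | 'output-available'
  | 'output-denied'
  | 'output-error';

// A tool part is "resolved" once it has produced output, errored, or was denied.
function isResolved(state: ToolState): boolean {
  return state === 'output-available' || state === 'output-denied' || state === 'output-error';
}

// Auto-send only when every tool part is resolved and none was denied.
function shouldAutoSend(states: ToolState[]): boolean {
  if (states.length === 0) return false;
  if (states.some((s) => s === 'output-denied')) return false; // rejection is final
  return states.every(isResolved);
}

console.log(shouldAutoSend(['output-available', 'output-available'])); // true
console.log(shouldAutoSend(['output-available', 'approval-requested'])); // false
console.log(shouldAutoSend(['output-available', 'output-denied'])); // false
```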
+
+## Tool UI Customization
+
+Each tool's visual appearance is customizable via `src/components/chat/tool-ui-config.ts`:
+
+```ts
+export interface ToolUIConfig {
+ /** Status text at each lifecycle stage */
+ labels: {
+ streaming: string; // While LLM streams tool input
+ executing: string; // While execute() runs
+ done: string; // On success
+ error: string; // On failure
+ };
+ /** Icon in the approval card header */
+ icon: LucideIcon;
+ /** Colored badge in the approval card */
+ approval?: {
+ badge: {
+ label: string; // e.g. "Send Email"
+ icon: LucideIcon; // Small icon in badge
+ variant: 'create' | 'update' | 'delete'; // green / blue / red
+ };
+ };
+ /** Custom renderer for tool input preview */
+ renderPreview?: (input: any) => React.ReactNode;
+}
+```
+
+### Default UI
+
+If no override is set, tools use:
+- Labels: "Working..." / "Executing..." / "Done" / "Failed"
+- Icon: `Wrench`
+- No approval badge
+- Key-value `DefaultPreview` for input display
+
+### Per-tool overrides
+
+Add entries to `toolUIRegistry`:
+
+```ts
+const toolUIRegistry = {
+ search_docs: {
+ labels: { streaming: 'Searching...', done: 'Found results' },
+ icon: Search,
+ },
+ delete_record: {
+ labels: { executing: 'Deleting...', done: 'Deleted', error: 'Delete failed' },
+ icon: Trash2,
+ approval: {
+ badge: { label: 'Delete Record', icon: Trash2, variant: 'delete' },
+ },
+ },
+};
+```
+
+Only the fields you provide are overridden; the rest fall back to defaults.
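
One plausible shape for that field-by-field fallback merge (the actual resolver lives in the installed components; labels only, icons omitted for brevity):

```typescript
interface ToolUILabels {
  streaming: string;
  executing: string;
  done: string;
  error: string;
}

// Defaults as documented in "Default UI" above.
const defaultLabels: ToolUILabels = {
  streaming: 'Working...',
  executing: 'Executing...',
  done: 'Done',
  error: 'Failed',
};

// Hypothetical resolver: per-tool overrides win field-by-field,
// everything else falls back to the defaults.
function resolveLabels(override?: Partial<ToolUILabels>): ToolUILabels {
  return { ...defaultLabels, ...override };
}

console.log(resolveLabels({ done: 'Found results' }).done); // Found results
console.log(resolveLabels({ done: 'Found results' }).error); // Failed (default)
```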
+
+### Custom preview
+
+For tools where the default key-value preview isn't ideal:
+
+```ts
+toolUIRegistry.create_chart = {
+  icon: BarChart,
+  renderPreview: (input) => (
+    <div className="space-y-1">
+      <div className="font-medium">{input.title}</div>
+      <div className="text-muted-foreground">
+        {input.type} chart with {input.data?.length} data points
+      </div>
+    </div>
+  ),
+};
+```
+
+## ToolMessage Component
+
+`ToolMessage` renders the full tool lifecycle:
+
+1. **Streaming** — `ToolStatus` with shimmer text
+2. **Approval pending** — `ToolApprovalCard` with badge + preview + Approve/Reject buttons
+3. **Executing** — `ToolStatus` spinner with "Executing..." label
+4. **Done** — `ToolStatus` green check with done label
+5. **Error** — `ExpandableError` with clickable details
+6. **Rejected** — `ToolStatus` error with "Rejected" label
+
+### ToolStatus component
+
+Renders a compact status line with icon + text:
+- `loading`: spinning Loader2 icon (via `motion/react`) + shimmer text
+- `done`: green Check icon
+- `error`: red CircleAlert icon
diff --git a/CLAUDE.md b/CLAUDE.md
index 4b48ecd..4382afa 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -6,7 +6,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
A collection of skills for AI coding agents working with Constructive tooling. Skills are packaged instructions and scripts that extend agent capabilities for GraphQL development workflows, following the [Agent Skills](https://agentskills.io/) format.
-## Available Skills (12 Umbrella Skills)
+## Available Skills (13 Umbrella Skills)
| Skill | Description |
|-------|-------------|
@@ -19,6 +19,7 @@ A collection of skills for AI coding agents working with Constructive tooling. S
| **constructive-frontend** | UI components (50+ on Base UI + Tailwind v4), CRUD Stack cards, dynamic `_meta` forms |
| **constructive-testing** | All test frameworks — pgsql-test, drizzle-orm-test, supabase-test, Drizzle ORM patterns, pgsql-parser |
| **constructive-ai** | AI capabilities — pgvector RAG pipelines, embeddings, Ollama CI/CD |
+| **constructive-chatbot** | AI chatbot widget — shadcn registry install, page scraping (`data-chat-*`), tool system (server/client), approval flows, embeddings/RAG |
| **constructive-tooling** | Dev tools — pnpm workspaces, inquirerer CLI framework, README formatting |
| **graphile-search** | Unified search plugin internals — SearchAdapter, tsvector/BM25/trgm/pgvector adapters (team-level) |
| **fbp** | Flow-Based Programming — types, spec, evaluator, graph editor |