---
layout: default
title: "Chapter 4: Prompt-to-App Workflow"
nav_order: 4
parent: Bolt.diy Tutorial
---
Welcome to Chapter 4: Prompt-to-App Workflow. In this part of the bolt.diy Tutorial: Build and Operate an Open Source AI App Builder, you will build an intuitive mental model first, then move into concrete implementation details and practical production tradeoffs.
This chapter explains how to transform natural-language intent into deterministic, reviewable product changes.
A high-quality bolt.diy workflow is not "prompt and pray". It is a controlled loop:
- define target outcome
- constrain scope
- generate minimal patch
- validate with commands
- iterate using evidence
```mermaid
flowchart LR
  A[Define Goal and Constraints] --> B[Draft Scoped Prompt]
  B --> C[Generate Candidate Changes]
  C --> D[Review Diff and Risk]
  D --> E[Run Validation Commands]
  E --> F{Pass?}
  F -- Yes --> G[Accept and Document]
  F -- No --> H[Refine Prompt with Failure Evidence]
  H --> B
```
Use this structure for most tasks:
```
Goal:
Scope (allowed files/directories):
Non-goals (must not change):
Expected behavior:
Validation command(s):
Definition of done:
```
This simple template dramatically reduces drift.
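The contract can also be checked mechanically before a prompt is sent. Below is a minimal sketch; the `PromptContract` type and its field names are hypothetical, chosen to mirror the template above, and are not part of bolt.diy.

```typescript
// Hypothetical sketch: reject a prompt contract that leaves any field
// of the template empty. Field names mirror the template above.
interface PromptContract {
  goal: string;
  scope: string[];              // allowed files/directories
  nonGoals: string[];           // must not change
  expectedBehavior: string;
  validationCommands: string[];
  definitionOfDone: string;
}

function missingFields(c: PromptContract): string[] {
  const missing: string[] = [];
  if (!c.goal.trim()) missing.push("goal");
  if (c.scope.length === 0) missing.push("scope");
  if (c.nonGoals.length === 0) missing.push("nonGoals");
  if (!c.expectedBehavior.trim()) missing.push("expectedBehavior");
  if (c.validationCommands.length === 0) missing.push("validationCommands");
  if (!c.definitionOfDone.trim()) missing.push("definitionOfDone");
  return missing;
}
```

A check like this makes "no validation command" a hard error instead of a habit you have to remember.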
A weak prompt:

```
Improve auth flow.
```
Problems:
- no scope
- no expected behavior
- no validation command
A stronger prompt:

```
Refactor token refresh handling in src/auth/session.ts only.
Do not modify routing or UI components.
Maintain current public API.
Run npm test -- auth-session.
Return changed files and test result summary.
```
Benefits:
- bounded file surface
- explicit constraints
- deterministic acceptance criteria
For multi-step work, break into milestones:
- scaffold interfaces only
- implement one subsystem
- run targeted tests
- integrate cross-module wiring
- run broader validation
Never combine an architecture redesign and a production bugfix in the same prompt.
When output is wrong, avoid vague feedback like "still broken".
Provide:
- failing command output
- exact expected behavior
- explicit file/function targets
- what should remain unchanged
This creates focused rework rather than broad retries.
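The evidence items above can be packaged into a refinement prompt programmatically. The helper below is a hypothetical illustration; the `FailureEvidence` shape is an assumption for the example, not a bolt.diy API.

```typescript
// Hypothetical helper: turn failure evidence into a focused
// refinement prompt following the structure described above.
interface FailureEvidence {
  command: string;        // the command that failed
  output: string;         // its actual output
  expected: string;       // the exact expected behavior
  targets: string[];      // explicit file/function targets
  keepUnchanged: string[]; // what must remain unchanged
}

function refinementPrompt(e: FailureEvidence): string {
  return [
    `Command: ${e.command}`,
    `Actual output:\n${e.output}`,
    `Expected behavior: ${e.expected}`,
    `Fix only: ${e.targets.join(", ")}`,
    `Do not change: ${e.keepUnchanged.join(", ")}`,
    "Produce the minimal patch that makes the command pass.",
  ].join("\n");
}
```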
| Gate | Question |
|---|---|
| scope gate | Did changes stay inside allowed files? |
| behavior gate | Does output satisfy stated goal? |
| safety gate | Any hidden config/auth/security impact? |
| validation gate | Did specified commands pass? |
| clarity gate | Is summary sufficient for reviewer handoff? |
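The five gates can be reduced to a mechanical checklist. A minimal sketch, with gate names mirroring the table; nothing here is a bolt.diy API.

```typescript
// Hypothetical gate checklist: a change is accepted only when every
// gate from the table above passes.
const GATES = ["scope", "behavior", "safety", "validation", "clarity"] as const;
type Gate = (typeof GATES)[number];

function evaluateGates(
  answers: Record<Gate, boolean>,
): { accepted: boolean; failures: Gate[] } {
  const failures = GATES.filter((g) => !answers[g]);
  return { accepted: failures.length === 0, failures };
}
```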
If multiple engineers share bolt.diy, standardize:
- one prompt template
- one summary format
- one minimal evidence format (command + result)
- one escalation path for risky changes
Consistency matters more than perfect wording.
| Symptom | Fix |
|---|---|
| Unrelated files modified | Tighten scope and explicitly forbid unrelated directories. |
| Same issue reappears across iterations | Include exact failing evidence and force a minimal-patch objective. |
| Hard to review what changed | Require a per-file summary plus pass/fail results. |
You now have a deterministic prompt-to-app method:
- explicit prompt contracts
- milestone-based iteration
- evidence-driven correction
- consistent acceptance gates
Next: Chapter 5: Files, Diff, and Locking
The action export in app/routes/api.chat.ts is the server-side handler for chat requests. Every prompt submitted through the bolt.diy UI passes through this route. It receives the conversation messages, the selected provider/model, and any constraints from the client, then delegates to the streaming LLM layer.
Understanding this file is key to tracing how a user's prompt becomes a model request, and where you can insert logging, validation, or budget-cap logic before the model call.
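As an illustration of budget-cap logic at this choke point, here is a minimal sketch. The `ChatMessage` shape, the `BudgetExceededError` name, and the rough 4-characters-per-token heuristic are all assumptions for the example, not bolt.diy code.

```typescript
// Hypothetical pre-flight check you could run inside the api.chat action
// before delegating to the streaming layer.
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

class BudgetExceededError extends Error {}

// Rough approximation (~4 chars per token), not a real tokenizer.
function estimateTokens(messages: ChatMessage[]): number {
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  return Math.ceil(chars / 4);
}

function enforceBudgetCap(messages: ChatMessage[], maxTokens: number): void {
  const estimate = estimateTokens(messages);
  if (estimate > maxTokens) {
    throw new BudgetExceededError(
      `estimated ${estimate} tokens exceeds cap of ${maxTokens}`,
    );
  }
}
```

Throwing before the model call keeps an oversized conversation from ever reaching the provider, which is cheaper than cancelling a stream mid-flight.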
The streaming layer in app/lib/llm/stream-text.ts handles the actual LLM call and streams tokens back to the client. It wraps the AI SDK's streamText function and applies provider-specific configuration.
This is where the prompt-to-response pipeline executes. For the prompt-to-app workflow, this is the boundary between "what the user asked" and "what the model generates" — the right place to add timeout controls, stream error recovery, or cost accounting.
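For example, a timeout guard could wrap whatever token stream the layer returns. The sketch below is generic over any `AsyncIterable`; it is not the AI SDK's own API, and the error message is illustrative.

```typescript
// Hypothetical timeout guard: if no token arrives within `ms`, the
// stream fails with a catchable error instead of hanging forever.
async function* withTokenTimeout<T>(
  stream: AsyncIterable<T>,
  ms: number,
): AsyncGenerator<T> {
  const it = stream[Symbol.asyncIterator]();
  while (true) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(() => reject(new Error(`no token within ${ms}ms`)), ms);
    });
    try {
      // Race the next token against the per-token timeout.
      const result = await Promise.race([it.next(), timeout]);
      if (result.done) return;
      yield result.value;
    } finally {
      if (timer !== undefined) clearTimeout(timer);
    }
  }
}
```

Wrapping the iterable this way means a stalled provider surfaces as an error the route can handle, rather than a silently hung request.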
The BaseChat component in app/components/chat/BaseChat.tsx is the primary UI container for the prompt input and conversation display. It manages the message list, the input field, and sends requests to api.chat.
For the prompt-to-app workflow, this component defines the user-facing contract: what the user types, how constraints are surfaced, and how the generated output is streamed back into the editor.
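To make that contract concrete, here is a hypothetical sketch of the request payload such a component might send to `api.chat`. The field names are illustrative assumptions; check the actual route for the real shape.

```typescript
// Hypothetical request payload for the chat route. Field names are
// illustrative, not bolt.diy's actual wire format.
interface ChatRequestBody {
  messages: { role: "user" | "assistant"; content: string }[];
  provider: string;
  model: string;
}

function buildChatRequest(body: ChatRequestBody): {
  method: string;
  headers: Record<string, string>;
  body: string;
} {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };
}
```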
```mermaid
flowchart TD
  A[User types prompt in BaseChat]
  B[Request sent to api.chat.ts action]
  C[Provider and model config applied]
  D[stream-text.ts calls LLM provider]
  E[Tokens stream back to UI]
  F[Generated code applied to editor]
  A --> B
  B --> C
  C --> D
  D --> E
  E --> F
```