---
title: Optimization workflows
subtitle: Use observability data to continuously improve your assistant
slug: observability/optimization-workflows
---

## What is optimization?

**Optimization** is the continuous improvement loop: using observability data to refine prompts, improve tool calls, and enhance conversation flows.

Unlike the previous stages (INSTRUMENT, TEST, EXTRACT, MONITOR), **OPTIMIZE is not a dedicated tool or feature** — it's a workflow that combines tools from all previous stages to drive systematic improvement.

**The optimization mindset**: Voice AI quality improves through iteration, not perfection. The best teams:
- Start with "good enough" (not perfect)
- Deploy to production with instrumentation and monitoring
- Use real-world data to identify improvement opportunities
- Test changes before deploying
- Track impact systematically

**Why optimization matters**: Without a systematic optimization workflow, teams fall into one of three traps:
- ❌ Over-engineering before launch (trying to predict every edge case)
- ❌ Reacting to problems ad hoc (fixing symptoms, not root causes)
- ❌ Stagnating after launch (no process for continuous improvement)

**The goal**: Establish a repeatable workflow that turns observability data into measurable improvements.

---

## Optimization workflow at a glance

| Stage | Tools used | What you do |
|-------|-----------|-------------|
| **1. Detect patterns** | Boards, Insights API, Analytics API | Spot trends in monitoring dashboards (success rate dropping, cost increasing, etc.) |
| **2. Extract details** | Webhooks, Structured Outputs, Transcripts | Pull call data to understand WHY the pattern exists |
| **3. Form hypothesis** | Manual analysis | Identify the root cause (e.g., "prompt doesn't handle edge case X") |
| **4. Make changes** | Assistant configuration | Update prompts, tools, and routing logic based on the hypothesis |
| **5. Test changes** | Evals, Simulations | Validate the improvement before deploying to production |
| **6. Deploy** | API, Dashboard | Push the updated assistant to production |
| **7. Verify** | Boards, Insights API | Track the target metric to confirm the improvement |

This is a **continuous cycle**, not a one-time activity:

```
MONITOR → EXTRACT → Analyze → Revise → TEST → Deploy → MONITOR (repeat)
```

<span className="vapi-validation">Confirm this optimization workflow accurately reflects how Vapi customers typically iterate on their assistants. Are there tools or stages we're missing? Should we emphasize certain steps more than others?</span>

---

## The optimization loop in detail

**[Placeholder - Full detail sections]**

### Step 1: Detect patterns from monitoring

<span className="internal-note">Placeholder for: How to use Boards/analytics to spot trends (success rate drops, cost spikes, etc.). Include example scenario.</span>
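As a minimal sketch of what pattern detection can look like once call records are exported from your monitoring dashboards, the snippet below flags a success-rate drop between two time windows. The record shape and `success` field are illustrative stand-ins, not the exact export schema:

```python
# Illustrative sketch: flag a success-rate drop between two monitoring windows.
# The call records and the "success" field are hypothetical stand-ins for data
# exported from your analytics dashboard or API.

def success_rate(calls):
    """Fraction of calls marked successful; None for an empty window."""
    if not calls:
        return None
    return sum(1 for c in calls if c["success"]) / len(calls)

def detect_drop(baseline, current, threshold=0.05):
    """True if the success rate fell by more than `threshold` between windows."""
    base, cur = success_rate(baseline), success_rate(current)
    if base is None or cur is None:
        return False
    return (base - cur) > threshold

# Example: last week vs. this week
last_week = [{"success": True}] * 92 + [{"success": False}] * 8   # 92% success
this_week = [{"success": True}] * 83 + [{"success": False}] * 17  # 83% success
print(detect_drop(last_week, this_week))  # a 9-point drop exceeds the 5% threshold
```

A fixed threshold like this is the simplest possible detector; dashboards typically layer in time-series smoothing and minimum sample sizes before alerting.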

---

### Step 2: Extract detailed data

<span className="internal-note">Placeholder for: Methods for pulling call transcripts, structured outputs, tool call logs. Show how to filter/export data for analysis.</span>
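Once a pattern is detected, the extraction step narrows exported call records down to the failures behind it. A rough sketch, assuming records with hypothetical `success`, `ended_reason`, and `transcript` fields (your actual export schema may differ):

```python
# Illustrative sketch: filter exported call records down to failures and
# skim their transcripts. The record shape and field names are hypothetical.

def failed_calls(calls, reason=None):
    """Return failed calls, optionally narrowed to one ended-reason."""
    failures = [c for c in calls if not c["success"]]
    if reason is not None:
        failures = [c for c in failures if c.get("ended_reason") == reason]
    return failures

calls = [
    {"id": "a", "success": True,  "ended_reason": "customer-ended-call", "transcript": "..."},
    {"id": "b", "success": False, "ended_reason": "silence-timeout",     "transcript": "..."},
    {"id": "c", "success": False, "ended_reason": "silence-timeout",     "transcript": "..."},
]

for call in failed_calls(calls, reason="silence-timeout"):
    print(call["id"], call["transcript"][:200])  # read the opening of each transcript
```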

---

### Step 3: Form a hypothesis

<span className="internal-note">Placeholder for: Common hypothesis patterns (prompt issues, tool description problems, routing logic, verbosity, etc.). Show example hypothesis formation process.</span>
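Hypothesis formation is manual, but a simple tally of hand-labeled failure reasons helps pick which hypothesis to test first. The labels below are hypothetical examples of what a review pass might produce:

```python
# Illustrative sketch: tally labeled failure reasons from a manual review pass
# to pick the hypothesis worth testing first. Labels are hypothetical.
from collections import Counter

failure_labels = [
    "missed-edge-case", "tool-arg-error", "missed-edge-case",
    "missed-edge-case", "verbose-response",
]
counts = Counter(failure_labels)
top_cause, n = counts.most_common(1)[0]
print(f"Most common failure: {top_cause} ({n} of {len(failure_labels)} calls)")
```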

---

### Step 4: Make targeted changes

<span className="internal-note">Placeholder for: How to revise prompts, update tool descriptions, refine conversation flows. Include before/after examples.</span>
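One way to keep changes targeted is to hold the revision as an explicit before/after pair so the diff is reviewable. The config shape below is a simplified stand-in, not the exact assistant API schema:

```python
# Illustrative sketch: a single targeted prompt revision, kept as a
# before/after pair. The config shape is a simplified stand-in, not
# the exact assistant API schema.

before = {
    "model": {
        "systemPrompt": "You are a scheduling assistant. Book appointments for callers.",
    }
}

after = {
    "model": {
        "systemPrompt": (
            "You are a scheduling assistant. Book appointments for callers.\n"
            "If the caller asks to reschedule an existing appointment, "
            "look it up first instead of creating a new one."
        ),
    }
}

# One change at a time keeps cause-and-effect clear when you verify later.
added = after["model"]["systemPrompt"].removeprefix(before["model"]["systemPrompt"])
print(added.strip())
```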

---

### Step 5: Test before deploying

<span className="internal-note">Placeholder for: Creating Evals for specific failure cases, regression testing strategies. Show example test structure.</span>
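A regression suite for the failure case you just fixed can be as small as a list of (input, expected-behavior) pairs. In this sketch, `run_assistant` is a hypothetical stand-in for however you actually invoke the assistant (eval runner, simulation, or live test call):

```python
# Illustrative sketch: a tiny regression suite for a fixed failure case.
# `run_assistant` is a hypothetical stand-in for your real eval harness.

def run_assistant(user_message):
    # Stand-in only: a real harness would call your assistant here.
    if "reschedule" in user_message.lower():
        return "Let me look up your existing appointment first."
    return "Sure, I can book that appointment."

eval_cases = [
    # (input, substring the response must contain)
    ("I'd like to reschedule my Tuesday appointment", "existing appointment"),
    ("Can I book an appointment for Friday?", "book"),
]

results = [(msg, expected in run_assistant(msg)) for msg, expected in eval_cases]
print(all(passed for _, passed in results))  # every case should pass before deploy
```

Keeping old cases in the suite (not just the new one) is what turns this into regression protection rather than a one-off check.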

---

### Step 6: Deploy

<span className="internal-note">Placeholder for: Deployment strategies (direct deploy, staged rollout, A/B testing). Include decision framework for choosing strategy.</span>
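For a staged rollout or A/B test, one common pattern is deterministic traffic splitting: hash a stable caller identifier into a bucket so each caller consistently gets the same assistant version. A sketch of that idea (the routing itself would live in your own call-handling code):

```python
# Illustrative sketch: deterministic traffic splitting for a staged rollout,
# assigning each caller to the old or new assistant version by a stable hash.
import hashlib

def variant(caller_id, rollout_pct):
    """Route `rollout_pct`% of callers to the new version, stably per caller."""
    bucket = int(hashlib.sha256(caller_id.encode()).hexdigest(), 16) % 100
    return "new" if bucket < rollout_pct else "old"

# The same caller always lands in the same bucket, so repeat interactions stay
# consistent while you widen the rollout (e.g., 10% → 50% → 100%).
print(variant("+15551234567", rollout_pct=10))
```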

---

### Step 7: Verify improvement

<span className="internal-note">Placeholder for: Time windows for verification (immediate, 24h, 1 week), what to track, when to roll back.</span>
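Verification comes down to comparing the target metric before and after the deploy, with an explicit rollback rule. The thresholds in this sketch are examples, not recommendations:

```python
# Illustrative sketch: decide whether a deploy improved the target metric or
# should be rolled back, comparing pre- and post-deploy windows. Thresholds
# are examples only.

def verdict(pre_rate, post_rate, min_gain=0.02, max_regression=0.02):
    """'improved', 'regressed' (roll back), or 'inconclusive' (keep watching)."""
    delta = post_rate - pre_rate
    if delta >= min_gain:
        return "improved"
    if delta <= -max_regression:
        return "regressed"
    return "inconclusive"

print(verdict(pre_rate=0.83, post_rate=0.91))  # improved
print(verdict(pre_rate=0.83, post_rate=0.78))  # regressed
```

An "inconclusive" result usually means the window is too short; widen it (24h, then a week) before deciding.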

---

## Common optimization scenarios

**[Placeholder - Table of common patterns, root causes, and optimization actions]**

<span className="vapi-validation">What are the most common optimization scenarios Vapi customers encounter? What issues drive the most improvement iterations? Are there voice-specific optimization patterns we should highlight?</span>

---

## Optimization best practices

**[Placeholder - Full detail sections]**

Topics to cover:
- Start with high-impact, low-effort changes
- Track improvement over time (optimization log)
- Don't optimize prematurely (wait for data)
- Make one change at a time (clear cause-and-effect)
- Maintain regression tests
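The optimization log mentioned above can be as lightweight as one record per change, tying the change to its hypothesis and measured impact. A sketch with suggested (not required) fields:

```python
# Illustrative sketch: a minimal optimization log, so each change, its
# hypothesis, and its measured impact stay reviewable over time.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class OptimizationEntry:
    changed: str      # what was changed (one change at a time)
    hypothesis: str   # why you expected it to help
    metric: str       # the metric you tracked
    before: float
    after: float
    deployed: date = field(default_factory=date.today)

    @property
    def delta(self):
        return self.after - self.before

log = [
    OptimizationEntry(
        changed="Added reschedule handling to system prompt",
        hypothesis="Callers asking to reschedule were treated as new bookings",
        metric="success_rate",
        before=0.83,
        after=0.91,
    )
]
print(f"{log[0].changed}: {log[0].delta:+.2f} {log[0].metric}")
```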

<span className="internal-note">Should we include specific guidance on optimization cadence (weekly reviews, monthly deep dives, quarterly retrospectives)?</span>

---

## What you'll learn in detailed guides

**Optimization is cross-functional** — it references tools from all previous stages:
- [Evals quickstart](/observability/evals-quickstart) — Test improvements before deploying
- [Boards quickstart](/observability/boards-quickstart) — Track metrics over time
- [Structured outputs quickstart](/assistants/structured-outputs-quickstart) — Extract failure data for analysis

- (Planned) Optimization playbook — Common scenarios and solutions
- (Planned) Advanced optimization — A/B testing, staged rollouts, impact measurement

---

## Key takeaway

**Optimize continuously**. The best teams treat observability as a loop: instrument → test → deploy → monitor → identify improvements → repeat. Data-driven iteration beats guesswork.

Start your optimization practice on day one. Don't wait until you have problems — establish the workflow while things are working, so you're ready when issues arise.

---

## Next steps

<CardGroup cols={2}>
  <Card
    title="Boards quickstart"
    icon="chart-line"
    href="/observability/boards-quickstart"
  >
    Set up monitoring to detect patterns
  </Card>

  <Card
    title="Evals quickstart"
    icon="clipboard-check"
    href="/observability/evals-quickstart"
  >
    Build tests to validate improvements
  </Card>

  <Card
    title="Production readiness"
    icon="check-circle"
    href="/observability/production-readiness"
  >
    Validate you're ready to optimize in production
  </Card>

  <Card
    title="Back to overview"
    icon="arrow-left"
    href="/observability/framework"
  >
    Return to observability framework
  </Card>
</CardGroup>