Monitoring lets you automatically track quality and detect issues across your voice AI agents. Instead of manually reviewing calls, you define monitors that continuously evaluate your call data against thresholds and alert you when something goes wrong.

### What is monitoring?

Monitoring is Vapi's automated quality and effectiveness system for voice AI. You create monitors that periodically evaluate call data using analytics queries (Insights), compare results against thresholds you define, and generate issues when those thresholds are exceeded. Your team receives alerts through notifiers, which are alert channels such as email, Slack, or webhooks, so you can investigate and resolve problems quickly.

### Core concepts

- **Monitors** define what to watch, which assistants to target, and when to evaluate
- **Triggers** run on a schedule and evaluate call data against thresholds
- **Issues** are created when a threshold is exceeded and a trigger fires, tracking the problem from detection to resolution
- **Notifiers** are alert channels (email, Slack, or webhook) that deliver notifications to your team when issues are created

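As a rough sketch, these concepts might come together in a single monitor definition like the following. This is illustrative only: `targets`, `insightId`, the check interval, and notifier credential IDs are all referenced later in this guide, but the exact field names and threshold shape shown here are assumptions, not the confirmed API schema.

```json
{
  "name": "Error rate monitor",
  "targets": "*",
  "triggers": [
    {
      "insightId": "your-insight-id",
      "interval": "1 hour",
      "threshold": 5
    }
  ],
  "notifierCredentialIds": ["your-notifier-credential-id"]
}
```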
### Monitor categories
</Step>

<Step title="Issues are created">
When a threshold is exceeded and a trigger fires, an issue is created with details about the affected calls, the trigger that fired, and the evaluation window.
</Step>

<Step title="Notifiers alert your team">
If notifiers are configured on the trigger, your team receives notifications via email, Slack, or webhook with issue details.
</Step>

<Step title="Investigate and resolve">
</Step>

## Step 1: Set up notifiers

Notifiers are alert channels (email, Slack, or webhook) that send notifications when issues are detected. You configure them as credentials in the Dashboard.

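For orientation, the three notifier channel types could be pictured as credential entries like these. The shapes below are assumptions for illustration only; per this guide, notifiers are actually configured in the Dashboard rather than as raw JSON.

```json
[
  { "type": "email", "to": "oncall@example.com" },
  { "type": "slack", "webhookUrl": "https://hooks.slack.com/services/your-webhook-path" },
  { "type": "webhook", "url": "https://example.com/vapi-alerts" }
]
```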
<Tabs>
<Tab title="Dashboard">
Define a monitor that tracks error rates and alerts you when errors exceed a threshold.

4. Set **Check frequency** to every **1 hour**

</Step>

<Step title="Enable notifiers">
1. Toggle **Notifiers** to enabled
2. Select one or more notifiers from the dropdown
3. Click **Save Monitor**
</Step>
Save the returned `id` for referencing this monitor later.

create the Insight first and reference its ID here.
</Note>

### Managing monitors

View, edit, and delete all your monitors from the **Monitors** page in the Dashboard sidebar (under Observe).

Via the API:

- `GET /monitoring/monitor` — list all monitors
- `PATCH /monitoring/monitor/:id` — update a monitor's targets, triggers, or thresholds
- `DELETE /monitoring/monitor/:id` — remove a monitor

### Targeting assistants
Setting `"targets": "*"` monitors **all current and future assistants** in your organization. Any assistant created after the monitor is set up is automatically included.
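The wildcard form from the paragraph above, shown as a request-body fragment:

```json
{
  "targets": "*"
}
```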
To monitor only specific assistants, pass an array of assistant IDs using the `targets` array. The `id` field is the assistant ID — the UUID you see in the Dashboard or get from `GET /assistant`.
```json title="Specific assistant targets"
{
  "targets": [
    { "id": "first-assistant-id" },
    { "id": "second-assistant-id" }
  ]
}
```
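The guide's example trigger evaluates at 9:00 AM and 5:00 PM on weekdays. If that schedule is expressed as a standard cron expression, the timing corresponds to the fragment below; the `schedule` field name is an assumption for illustration.

```json
{
  "schedule": "0 9,17 * * 1-5"
}
```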
## Step 3: View issues
When a threshold is exceeded and a trigger fires, an issue is created. You can view and manage issues in the Dashboard or via the API.
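Collecting the fields this guide mentions, an issue returned by the API might look roughly like the sketch below. Only `calls`, `callId`, the per-alert `status` values, and the idea of credential IDs are drawn from this guide; every other field name is an assumption.

```json
{
  "id": "issue-id",
  "status": "open",
  "calls": [
    { "callId": "call-id-1" },
    { "callId": "call-id-2" }
  ],
  "alerts": [
    { "credentialId": "notifier-credential-id", "status": "success" }
  ]
}
```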
<Tabs>
<Tab title="Dashboard">
return the cached result immediately.
</Note>
<Tip>
Use the `callId` values from the issue's `calls` array to review specific call logs and recordings for deeper investigation. Each call ID links directly to the call details in your Dashboard.
</Tip>
## Step 5: Resolve an issue
After investigating and fixing the underlying problem, acknowledge and resolve the issue to track your team's response. Resolve an issue once you've deployed a fix and confirmed the problem no longer recurs — this signals to your team that the root cause has been addressed. Acknowledgment and resolution timestamps help measure your team's incident response times.
<Note>
Issues are a single shared resource. Status changes made in the Dashboard are immediately reflected in API responses, and vice versa. Your team can freely use both.
</Note>

| Problem | Solution |
| --- | --- |
| No issues are being created | Verify monitor status is "active". Check that the trigger interval and threshold are configured correctly. Ensure your assistants have recent call data. |
| Alerts not received | Confirm notifiers are enabled on the trigger. Verify credential IDs reference valid notifiers. Check the notifier configuration (email address, webhook URL, Slack webhook). |
| Analysis returns an error | Ensure the issue has associated calls. Analysis requires call data to identify patterns. |
| Monitor not evaluating | Check that the `insightId` references a valid Insight. Verify the trigger interval is at least 1 minute. |
| Wrong calls detected | Review the Insight query that the trigger references. Ensure the monitor targets the correct assistants. |
| Duplicate issues | Triggers create new issues per evaluation window. This is expected behavior when a problem persists across multiple evaluation periods. |
<Warning>
If an alert shows a `"failure"` status in the issue's alerts array, the notification delivery failed. Check your notifier configuration and ensure the destination (email, Slack webhook, URL) is reachable.
</Warning>