| title | Debugging voice agents |
|---|---|
| subtitle | Learn to identify, diagnose, and fix common issues with your voice assistants and workflows |
| slug | debugging |
Voice agents involve multiple AI systems working together—speech recognition, language models, and voice synthesis. When something goes wrong, systematic debugging helps you quickly identify and fix the root cause.
Most common issues fall into these categories:
* Agent doesn't understand user input correctly
* Responses are inappropriate or inconsistent
* Agent sounds robotic or unnatural
* Call quality issues or audio problems
* Tool integrations failing or returning errors
* Workflow logic not executing as expected

Start with these immediate checks before diving deeper:
Test your voice agent directly in the [dashboard](https://dashboard.vapi.ai/):

<CardGroup cols={2}>
<Card title="Assistants" icon="robot">
Click "Talk to Assistant" to test
</Card>
<Card title="Workflows" icon="diagram-project">
Click "Call" to test workflow
</Card>
</CardGroup>
**Benefits:**
- Eliminates phone network variables
- Provides real-time transcript view
- Shows tool execution results immediately
<CardGroup cols={3}>
<Card title="Call Logs" icon="phone">
Review call transcripts, durations, and error messages
</Card>
<Card title="API Logs" icon="code">
Check API requests and responses for integration issues
</Card>
<Card title="Webhook Logs" icon="webhook">
Verify webhook deliveries and server responses
</Card>
</CardGroup>
<CardGroup cols={2}>
<Card title="Voice Test Suites" icon="vial">
Automated testing for assistants
</Card>
<Card title="Tool Testing" icon="wrench">
Test tools with sample data
</Card>
</CardGroup>
**Core Services:**
- Visit [Vapi Status Page](https://status.vapi.ai/) for Vapi service status
**Provider Status Pages:**
- [OpenAI Status](https://status.openai.com/) for OpenAI language models
- [Anthropic Status](https://status.anthropic.com/) for Anthropic language models
- [ElevenLabs Status](https://status.elevenlabs.io/) for ElevenLabs voice synthesis
- [Deepgram Status](https://status.deepgram.com/) for Deepgram speech-to-text
- [Gladia Status](https://status.gladia.io/) for Gladia speech-to-text
- And other providers' status pages as needed
The Vapi dashboard provides powerful debugging features to help you identify and fix issues quickly:
Navigate to Observe > Call Logs to:
- Review complete call transcripts
- Check call duration and completion status
- Identify where calls failed or ended unexpectedly
- See tool execution results and errors
- Analyze conversation flow in workflows
<video autoPlay loop muted src="./static/videos/debugging/call-logs.mp4" type="video/mp4" style={{ aspectRatio: '16 / 9', width: '100%' }} />
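Call logs can also be pulled programmatically for offline analysis. A minimal sketch in Python, assuming Vapi's REST `GET https://api.vapi.ai/call` endpoint with Bearer-token auth and an `endedReason` field on each call — verify both against the API reference:

```python
import json
import urllib.request

VAPI_API_KEY = "your-api-key"  # replace with your private API key

def list_calls(limit: int = 10) -> list:
    """Fetch recent calls; endpoint path and auth scheme are assumptions
    based on Vapi's REST API -- check the API reference."""
    req = urllib.request.Request(
        f"https://api.vapi.ai/call?limit={limit}",
        headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def failed_calls(calls: list) -> list:
    """Keep calls whose endedReason looks like an error (field name assumed)."""
    return [c for c in calls if "error" in str(c.get("endedReason", "")).lower()]
```

Filtering on `endedReason` quickly separates calls that ended in pipeline or tool errors from normal hang-ups.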
Navigate to Observe > API Logs to:
- Monitor all API requests and responses
- Check for authentication errors
- Verify request payloads and response codes
- Debug integration issues with external services
<video autoPlay loop muted src="./static/videos/debugging/api-logs.mp4" type="video/mp4" style={{ aspectRatio: '16 / 9', width: '100%' }} />
Navigate to Observe > Webhook Logs to:
- Verify webhook deliveries to your server
- Check server response codes and timing
- Debug webhook authentication issues
- Monitor event delivery failures
<video autoPlay loop muted src="./static/videos/debugging/webhook-logs.mp4" type="video/mp4" style={{ aspectRatio: '16 / 9', width: '100%' }} />
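When debugging deliveries locally, it helps to have a handler that acknowledges quickly and logs the event type. A minimal standard-library sketch — the `message.type` envelope is an assumption about Vapi's webhook payload shape, so verify it against the API reference:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def event_type(payload: dict) -> str:
    # Assumption: Vapi wraps each event under a "message" key with a "type" field.
    return payload.get("message", {}).get("type", "unknown")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("received event:", event_type(payload))
        # Respond 200 quickly; slow or non-2xx responses show up in Webhook Logs.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")

# To run locally: HTTPServer(("", 3000), WebhookHandler).serve_forever()
```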
Use the Vapi CLI to forward webhooks to your local development server:
```bash
# Terminal 1: Create tunnel (e.g., with ngrok)
ngrok http 4242

# Terminal 2: Forward webhooks
vapi listen --forward-to localhost:3000/webhook
```

Navigate to Test > Voice Test Suites to:
- Run automated tests on your assistants (not available for workflows)
- Test conversation flows with predefined scenarios
- Verify assistant behavior across different inputs
- Monitor performance over time
<video autoPlay loop muted src="./static/videos/debugging/voice-test-suites.mp4" type="video/mp4" style={{ aspectRatio: '16 / 9', width: '100%' }} />
For any tool in your Tools section:
- Navigate to Tools > [Select Tool]
- Use the Test button to send sample payloads
- Verify tool responses and error handling
- Debug parameter extraction and API calls
<video autoPlay loop muted src="./static/videos/debugging/tool-testing.mp4" type="video/mp4" style={{ aspectRatio: '16 / 9', width: '100%' }} />
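When a tool test fails, a common cause is a tool-server response that doesn't match the shape Vapi expects. A small validator sketch, assuming the documented `{"results": [{"toolCallId", "result"}]}` response format — confirm the exact shape in the API reference:

```python
def validate_tool_response(body: dict) -> list:
    """Return a list of problems with a tool-server response body.
    The expected shape is an assumption based on Vapi's tool docs."""
    problems = []
    results = body.get("results")
    if not isinstance(results, list) or not results:
        return ["missing or empty 'results' array"]
    for i, r in enumerate(results):
        if "toolCallId" not in r:
            problems.append(f"results[{i}] missing 'toolCallId'")
        if "result" not in r and "error" not in r:
            problems.append(f"results[{i}] needs 'result' or 'error'")
    return problems
```

Running this against the raw body captured in API Logs often pinpoints why a tool's output never reaches the conversation.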
| Problem | Symptoms | Solution |
|---|---|---|
| Transcription accuracy | Incorrect words in transcripts, missing words/phrases, poor performance with accents | Switch to a more accurate transcriber model; add custom keyterms for domain-specific vocabulary |
| Intent recognition | Agent responds to wrong intent, fails to extract variables, workflow routing to wrong nodes | Make system prompt / node prompt more specific; use clear enum values; adjust the temperature to ensure consistent outputs |
| Response quality | Different responses to identical inputs, agent forgets context, doesn't follow instructions | Review system prompt / node prompt specificity; check model configuration; adjust temperature to achieve consistency |
Debug steps for response quality:
- Review system prompt - Navigate to your assistant/workflow in the dashboard and check the system prompt specificity
- Check model configuration - Scroll down to the Model section and verify:
  - You're using an appropriate model (e.g., `gpt-4o`)
  - `Max Tokens` is sufficient for response length
  - Necessary tools are enabled and configured correctly
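The checks above can be expressed as a quick lint over the model settings. The field names (`provider`, `model`, `temperature`, `maxTokens`) follow Vapi's assistant schema but should be verified against the API reference:

```python
# Example assistant model config fragment -- field names are assumptions
# based on Vapi's assistant schema.
model_config = {
    "provider": "openai",
    "model": "gpt-4o",    # capable model for complex prompts
    "temperature": 0.3,   # lower values give more consistent responses
    "maxTokens": 250,     # enough headroom for the longest expected reply
}

def consistency_warnings(cfg: dict) -> list:
    """Flag settings that commonly cause inconsistent or truncated responses."""
    warnings = []
    if cfg.get("temperature", 0) > 0.7:
        warnings.append("high temperature may cause inconsistent responses")
    if cfg.get("maxTokens", 0) < 50:
        warnings.append("low maxTokens may truncate responses mid-sentence")
    return warnings
```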
| Response Issue | Solution |
|---|---|
| Responses too long | Add "Keep responses under X words" to system prompt |
| Robotic speech | Switch to a different voice provider |
| Forgetting context | Use models with larger context windows |
| Wrong information | Check tool outputs and knowledge base accuracy via Call Logs |
| Problem Type | Issue | Solution |
|---|---|---|
| Tool execution | Tools failing, HTTP errors, parameter issues | Navigate to Observe > Call Logs and check tool execution section, test tools individually at Tools > [Select Tool] > Test, validate configuration |
| Variable extraction | Variables not extracted, wrong values, missing data | Be specific in variable descriptions, use distinct enum values, add validation prompts |
| Workflow logic | Wrong node routing, conditions not triggering, variables not passing | Use Call Logs to trace conversation path, verify edge conditions are clear, check global node conflicts |
Variable extraction details:
| Problem | Cause | Solution |
|---|---|---|
| Variables not extracted | Unclear description | Be specific in variable descriptions: "Customer's 10-digit phone number" |
| Wrong variable values | Ambiguous enum options | Use distinct enum values: "schedule", "cancel", "reschedule" |
| Missing required variables | User didn't provide info | Add validation prompts to request missing data |
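The three fixes above can be combined in a single parameter schema. A hypothetical appointment-tool example in JSON-Schema style, with a specific description and distinct enum values; exact placement in your assistant config depends on Vapi's tool schema:

```python
# Hypothetical parameter schema for an appointment tool -- illustrative only.
appointment_parameters = {
    "type": "object",
    "properties": {
        "action": {
            "type": "string",
            "enum": ["schedule", "cancel", "reschedule"],  # distinct, non-overlapping values
            "description": "What the customer wants to do with their appointment",
        },
        "phone_number": {
            "type": "string",
            "description": "Customer's 10-digit phone number, digits only",
        },
    },
    # Marking fields required encourages the agent to prompt for missing data.
    "required": ["action", "phone_number"],
}
```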
| Error Pattern | Likely Cause | Quick Fix |
|---|---|---|
| Agent misinterpreting speech | Speech recognition issue | Check transcriber model, add custom keyterms |
| Irrelevant responses | Poor prompt engineering | Be more specific in system prompt |
| Call drops immediately | Configuration error | Check all required fields in assistant/workflow settings |
| Tool errors | API integration issue | Test tools individually, verify endpoint URLs |
| Long silences | Model processing delay | Use faster models or reduce response length |
When you're stuck:
Join the Vapi Discord for real-time help from the community and team.

<Card title="API Reference" icon="book" href="/api-reference">
Check the API reference for detailed configuration options
</Card>
<Card title="Status Page" icon="fa-light fa-heartbeat" href="https://status.vapi.ai/">
Check real-time status of Vapi services and AI providers
</Card>
Before asking for help: