From 37da612c4fd5d056cb30124118192178eab49cac Mon Sep 17 00:00:00 2001
From: Quantstruct Bot
Date: Tue, 17 Jun 2025 14:42:58 -0700
Subject: [PATCH] Add changelog for 2025-06-16

---
 fern/changelog/2025-06-14.mdx |  3 +++
 fern/changelog/2025-06-15.mdx |  3 +++
 fern/changelog/2025-06-16.mdx | 17 +++++++++++++++++
 3 files changed, 23 insertions(+)
 create mode 100644 fern/changelog/2025-06-14.mdx
 create mode 100644 fern/changelog/2025-06-15.mdx
 create mode 100644 fern/changelog/2025-06-16.mdx

diff --git a/fern/changelog/2025-06-14.mdx b/fern/changelog/2025-06-14.mdx
new file mode 100644
index 000000000..4faf53d60
--- /dev/null
+++ b/fern/changelog/2025-06-14.mdx
@@ -0,0 +1,3 @@
+# Access to `chat` Object in Server Messages
+
+1. **Access to `chat` Object in Server Messages**: You can now access the `chat` object within various server messages, providing additional context about the conversation.
diff --git a/fern/changelog/2025-06-15.mdx b/fern/changelog/2025-06-15.mdx
new file mode 100644
index 000000000..7f75eaa74
--- /dev/null
+++ b/fern/changelog/2025-06-15.mdx
@@ -0,0 +1,3 @@
+# New Storage Credential Providers
+
+1. **New Storage Provider Credentials Added**: You can now use the new credential types [`S3Credential`](https://api.vapi.ai/api#:~:text=S3Credential), [`GcpCredential`](https://api.vapi.ai/api#:~:text=GcpCredential), [`AzureCredential`](https://api.vapi.ai/api#:~:text=AzureCredential), [`SupabaseCredential`](https://api.vapi.ai/api#:~:text=SupabaseCredential), and [`CloudflareCredential`](https://api.vapi.ai/api#:~:text=CloudflareCredential) to integrate with various storage services. This expands your options for storing data across different providers.
\ No newline at end of file
diff --git a/fern/changelog/2025-06-16.mdx b/fern/changelog/2025-06-16.mdx
new file mode 100644
index 000000000..8b39d9517
--- /dev/null
+++ b/fern/changelog/2025-06-16.mdx
@@ -0,0 +1,17 @@
+# New Model Selection, Enhanced Edge Conditions, Simplified Credentials, and More
+
+1. **New Model Selection in Workflows**: You can now specify the AI model used in workflows by setting the `model` property in workflow schemas. This lets you choose OpenAI, Anthropic, Google, or custom models to better suit your application's requirements.
+
+2. **Enhanced Workflow Edge Conditions**: Workflows now support [`Logic Edge Conditions`](https://api.vapi.ai/api#:~:text=LogicEdgeCondition) and [`Failed Edge Conditions`](https://api.vapi.ai/api#:~:text=FailedEdgeCondition) on edges. Specify logic edge conditions with [Liquid JS templates](https://liquidjs.com/) to enable more complex branching and error handling, allowing for dynamic and responsive workflow designs.
+
+3. **Simplified Credential Configuration**: Your uploaded credentials are now automatically configured with the correct fallback index, simplifying setup with cloud providers.
+
+4. **Updated End Reasons for ElevenLabs**: The following `endedReason` values have been removed from `Call`:
+   - `pipeline-error-eleven-labs-503-server-error`
+   - `call.in-progress.error-providerfault-eleven-labs-503-server-error`
+
+   You should update your error handling code to reflect the current set of possible end reasons.
+
+
+5. **Prompt Length Limitations**: The `globalPrompt` in workflows now has a maximum length of 5000 characters, and the `liquid` property in `LogicEdgeCondition` now has a maximum length of 1000 characters. Ensure prompts and conditions stay within these limits to prevent errors.
+
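
For integrators, the removed ElevenLabs end reasons in the 2025-06-16 entry can be pruned from local error handling defensively. A minimal sketch, assuming you map `endedReason` strings to actions yourself; the helper name and action labels here are hypothetical, not part of the Vapi API:

```python
# Hypothetical sketch: classify a Call endedReason, treating the two values
# removed in the 2025-06-16 changelog as stale (they no longer appear on Call).
REMOVED_END_REASONS = {
    "pipeline-error-eleven-labs-503-server-error",
    "call.in-progress.error-providerfault-eleven-labs-503-server-error",
}

def handle_ended_reason(reason: str) -> str:
    """Return an illustrative action label for a Call endedReason string."""
    if reason in REMOVED_END_REASONS:
        # Stale value from before the 2025-06-16 change; log and ignore.
        return "stale-reason"
    if reason.startswith("pipeline-error-"):
        # Other pipeline errors might still warrant a retry in this sketch.
        return "retry"
    return "ok"
```

The exact set of current `endedReason` values should be taken from the API reference rather than hard-coded.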
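
The new length limits (5000 characters for `globalPrompt`, 1000 for `LogicEdgeCondition.liquid`) can be checked client-side before submitting a workflow. A minimal sketch; the validation helper is illustrative and not part of any Vapi SDK:

```python
# Hypothetical client-side check against the limits from the 2025-06-16 changelog.
MAX_GLOBAL_PROMPT_CHARS = 5000
MAX_LIQUID_CONDITION_CHARS = 1000

def validate_workflow_lengths(global_prompt: str, liquid_conditions: list[str]) -> list[str]:
    """Return a list of human-readable errors for any field over its limit."""
    errors = []
    if len(global_prompt) > MAX_GLOBAL_PROMPT_CHARS:
        errors.append(
            f"globalPrompt is {len(global_prompt)} chars; max is {MAX_GLOBAL_PROMPT_CHARS}"
        )
    for i, liquid in enumerate(liquid_conditions):
        if len(liquid) > MAX_LIQUID_CONDITION_CHARS:
            errors.append(
                f"liquid condition {i} is {len(liquid)} chars; max is {MAX_LIQUID_CONDITION_CHARS}"
            )
    return errors
```

Running such a check locally surfaces over-limit prompts before the API rejects them.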