     "ExternalLlm",
     "FallbackConfig",
     "FallbackConfigExternalLlm",
+    "Integration",
+    "InterruptionSettings",
+    "InterruptionSettingsStartSpeakingPlan",
+    "InterruptionSettingsStartSpeakingPlanTranscriptionEndpointingPlan",
+    "McpServer",
     "PostConversationSettings",
 ]
 
@@ -34,27 +39,31 @@ class AssistantCreateParams(TypedDict, total=False):
     [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)
     """
 
-    model: Required[str]
-    """ID of the model to use.
-
-    You can use the
-    [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models)
-    to see all of your available models,
-    """
-
     name: Required[str]
 
     description: str
 
     dynamic_variables: Dict[str, object]
     """Map of dynamic variables and their default values"""
 
+    dynamic_variables_webhook_timeout_ms: int
+    """Timeout in milliseconds for the dynamic variables webhook.
+
+    Must be between 1 and 10000 ms. If the webhook does not respond within this
+    timeout, the call proceeds with default values. See the
+    [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables).
+    """
+
     dynamic_variables_webhook_url: str
     """
-    If the dynamic_variables_webhook_url is set for the assistant, we will send a
-    request at the start of the conversation. See our
-    [guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)
-    for more information.
+    If `dynamic_variables_webhook_url` is set, Telnyx sends a POST request to this
+    URL at the start of the conversation to resolve dynamic variables. **Gotcha:**
+    the webhook response must wrap variables under a top-level `dynamic_variables`
+    object, e.g. `{"dynamic_variables": {"customer_name": "Jane"}}`. Returning a
+    flat object will be ignored and variables will fall back to their defaults. See
+    the
+    [dynamic variables guide](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)
+    for the full request/response format and timeout behavior.
     """
 
     enabled_features: List[EnabledFeatures]
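The envelope requirement above is easy to get wrong, so here is a minimal sketch of building a valid webhook response body, assuming only the shape described in the docstring (the function and variable names are illustrative, not part of the SDK):

```python
import json


def build_webhook_response(variables: dict) -> str:
    """Wrap resolved variables in the top-level `dynamic_variables` envelope.

    A flat body like '{"customer_name": "Jane"}' would be ignored, and the
    call would proceed with the defaults from `dynamic_variables`.
    """
    return json.dumps({"dynamic_variables": variables})


# Correctly enveloped response body for the webhook:
correct = build_webhook_response({"customer_name": "Jane"})
```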
@@ -75,17 +84,52 @@ class AssistantCreateParams(TypedDict, total=False):
 
     insight_settings: InsightSettingsParam
 
-    llm_api_key_ref: str
-    """This is only needed when using third-party inference providers.
+    integrations: Iterable[Integration]
+    """Connected integrations attached to the assistant.
 
-    The `identifier` for an integration secret
+    The catalog of available integrations is at `/ai/integrations`; the user's
+    connected integrations are at `/ai/integrations/connections`. Each item
+    references a catalog integration by `integration_id`.
+    """
+
+    interruption_settings: InterruptionSettings
+    """
+    Settings for interruptions and how the assistant decides the user has finished
+    speaking. These timings are most relevant when using non turn-taking
+    transcription models. For turn-taking models like `deepgram/flux`, end-of-turn
+    behavior is controlled by the transcription end-of-turn settings under
+    `transcription.settings` (`eot_threshold`, `eot_timeout_ms`,
+    `eager_eot_threshold`).
+    """
+
+    llm_api_key_ref: str
+    """
+    This is only needed when using third-party inference providers selected by
+    `model`. The `identifier` for an integration secret
     [/v2/integration_secrets](https://developers.telnyx.com/api-reference/integration-secrets/create-a-secret)
-    that refers to your LLM provider's API key. Warning: Free plans are unlikely to
-    work with this integration.
+    that refers to your LLM provider's API key. For bring-your-own endpoint
+    authentication, use `external_llm.llm_api_key_ref` instead. Warning: Free plans
+    are unlikely to work with this integration.
+    """
+
+    mcp_servers: Iterable[McpServer]
+    """MCP servers attached to the assistant.
+
+    Create MCP servers with `/ai/mcp_servers`, then reference them by `id` here.
     """
 
     messaging_settings: MessagingSettingsParam
 
+    model: str
+    """ID of the model to use when `external_llm` is not set.
+
+    You can use the
+    [Get models API](https://developers.telnyx.com/api-reference/chat/get-available-models)
+    to see available models. If `external_llm` is provided, the assistant uses
+    `external_llm` instead of this field. If neither `model` nor `external_llm` is
+    provided, Telnyx applies the default model.
+    """
+
     observability_settings: ObservabilityReqParam
 
     post_conversation_settings: PostConversationSettings
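The `model` / `external_llm` precedence described in the new docstring can be sketched as a small helper. This is an illustration of the documented selection order, not SDK code; the model ID and secret identifier are made up, and `external_llm`'s endpoint fields are omitted here:

```python
def effective_engine(params: dict) -> str:
    """Mirror the documented precedence: `external_llm` wins over `model`;
    with neither present, Telnyx applies its default model."""
    if params.get("external_llm"):
        return "external_llm"
    return params.get("model", "<telnyx default model>")


hosted = {"name": "support-bot", "model": "example/model-id"}  # model ID made up
byo = {
    "name": "support-bot",
    # llm_api_key_ref is defined in this file; other ExternalLlm fields omitted.
    "external_llm": {"llm_api_key_ref": "my_llm_secret"},
}
```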
@@ -100,15 +144,25 @@ class AssistantCreateParams(TypedDict, total=False):
 
     privacy_settings: PrivacySettingsParam
 
+    tags: SequenceNotStr[str]
+    """Tags associated with the assistant.
+
+    Tags can also be managed with the assistant tag endpoints.
+    """
+
     telephony_settings: TelephonySettingsParam
 
     tool_ids: SequenceNotStr[str]
+    """IDs of shared tools to attach to the assistant.
+
+    New integrations should prefer `tool_ids` over inline `tools`.
+    """
 
     tools: Iterable[AssistantToolParam]
-    """The tools that the assistant can use.
+    """Deprecated for new integrations.
 
-    These may be templated with
-    [dynamic variables](https://developers.telnyx.com/docs/inference/ai-assistants/dynamic-variables)
+    Inline tool definitions available to the assistant. Prefer `tool_ids` to attach
+    shared tools created with the AI Tools endpoints.
     """
 
     transcription: TranscriptionSettingsParam
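In request-body form, the preference for shared tools over inline definitions looks like the following sketch (tool IDs and tag values are placeholders):

```python
# Preferred: reference shared tools by ID via `tool_ids` instead of defining
# them inline under the deprecated-for-new-integrations `tools` field.
params = {
    "name": "support-bot",
    "tool_ids": ["tool_abc123", "tool_def456"],  # placeholder IDs
    "tags": ["support", "voice"],
}
```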
@@ -137,11 +191,13 @@ class ExternalLlm(TypedDict, total=False):
 
     forward_metadata: bool
     """
-    When enabled, Telnyx forwards the assistant's dynamic variables to the external
-    LLM endpoint. Defaults to false. The chat completion request includes a
-    top-level `extra_metadata` object when dynamic variables are available. For
-    example:
-    `{"extra_metadata":{"customer_name":"Jane","account_id":"acct_789","telnyx_agent_target":"+13125550100","telnyx_end_user_target":"+13125550123"}}`.
+    When `true`, Telnyx forwards the assistant's dynamic variables to the external
+    LLM endpoint as a top-level `extra_metadata` object on the chat completion
+    request body. Defaults to `false`. Example payload sent to the external
+    endpoint:
+    `{"extra_metadata": {"customer_name": "Jane", "account_id": "acct_789", "telnyx_agent_target": "+13125550100", "telnyx_end_user_target": "+13125550123"}}`.
+    Distinct from OpenAI's native `metadata` field, which has its own size and type
+    limits.
     """
 
     llm_api_key_ref: str
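On the receiving side, a bring-your-own endpoint can read the forwarded variables like this. A sketch assuming only the `extra_metadata` shape quoted in the docstring above:

```python
import json

# Example request body, as quoted in the forward_metadata docstring.
request_body = json.loads(
    '{"extra_metadata": {"customer_name": "Jane", "account_id": "acct_789",'
    ' "telnyx_agent_target": "+13125550100",'
    ' "telnyx_end_user_target": "+13125550123"}}'
)

# `extra_metadata` is a top-level object on the chat completion request,
# separate from OpenAI's native `metadata` field.
meta = request_body.get("extra_metadata", {})
caller = meta.get("telnyx_end_user_target")
```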
@@ -171,11 +227,13 @@ class FallbackConfigExternalLlm(TypedDict, total=False):
 
     forward_metadata: bool
     """
-    When enabled, Telnyx forwards the assistant's dynamic variables to the external
-    LLM endpoint. Defaults to false. The chat completion request includes a
-    top-level `extra_metadata` object when dynamic variables are available. For
-    example:
-    `{"extra_metadata":{"customer_name":"Jane","account_id":"acct_789","telnyx_agent_target":"+13125550100","telnyx_end_user_target":"+13125550123"}}`.
+    When `true`, Telnyx forwards the assistant's dynamic variables to the external
+    LLM endpoint as a top-level `extra_metadata` object on the chat completion
+    request body. Defaults to `false`. Example payload sent to the external
+    endpoint:
+    `{"extra_metadata": {"customer_name": "Jane", "account_id": "acct_789", "telnyx_agent_target": "+13125550100", "telnyx_end_user_target": "+13125550123"}}`.
+    Distinct from OpenAI's native `metadata` field, which has its own size and type
+    limits.
     """
 
     llm_api_key_ref: str
@@ -200,6 +258,100 @@ class FallbackConfig(TypedDict, total=False):
     """
 
 
+class Integration(TypedDict, total=False):
+    """Reference to a connected integration attached to an assistant.
+
+    Discover available integrations with `/ai/integrations` and connected
+    integrations with `/ai/integrations/connections`.
+    """
+
+    integration_id: Required[str]
+    """Catalog integration ID to attach.
+
+    This is the `id` from the integrations catalog at `/ai/integrations` (the same
+    value also appears as `integration_id` on entries returned by
+    `/ai/integrations/connections`). It is **not** the connection-level `id` from
+    `/ai/integrations/connections`.
+    """
+
+    allowed_list: SequenceNotStr[str]
+    """Optional per-assistant allowlist of integration tool names.
+
+    When omitted or empty, all tools allowed by the connected integration are
+    available to the assistant.
+    """
+
+
+class InterruptionSettingsStartSpeakingPlanTranscriptionEndpointingPlan(TypedDict, total=False):
+    """Endpointing thresholds used to decide when the user has finished speaking.
+
+    Applies to non turn-taking transcription models. For `deepgram/flux`, use
+    `transcription.settings.eot_threshold` / `eot_timeout_ms` /
+    `eager_eot_threshold`.
+    """
+
+    on_no_punctuation_seconds: float
+    """Seconds to wait after the transcript ends without punctuation."""
+
+    on_number_seconds: float
+    """Seconds to wait after the transcript ends with a number."""
+
+    on_punctuation_seconds: float
+    """Seconds to wait after the transcript ends with punctuation."""
+
+
+class InterruptionSettingsStartSpeakingPlan(TypedDict, total=False):
+    """Controls when the assistant starts speaking after the user stops.
+
+    These thresholds primarily apply to non turn-taking transcription models. For
+    turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the
+    transcription end-of-turn settings under `transcription.settings` instead.
+    """
+
+    transcription_endpointing_plan: InterruptionSettingsStartSpeakingPlanTranscriptionEndpointingPlan
+    """Endpointing thresholds used to decide when the user has finished speaking.
+
+    Applies to non turn-taking transcription models. For `deepgram/flux`, use
+    `transcription.settings.eot_threshold` / `eot_timeout_ms` /
+    `eager_eot_threshold`.
+    """
+
+    wait_seconds: float
+    """Minimum seconds to wait before the assistant starts speaking."""
+
+
+class InterruptionSettings(TypedDict, total=False):
+    """
+    Settings for interruptions and how the assistant decides the user has finished
+    speaking. These timings are most relevant when using non turn-taking
+    transcription models. For turn-taking models like `deepgram/flux`, end-of-turn
+    behavior is controlled by the transcription end-of-turn settings under
+    `transcription.settings` (`eot_threshold`, `eot_timeout_ms`,
+    `eager_eot_threshold`).
+    """
+
+    enable: bool
+    """Whether users can interrupt the assistant while it is speaking."""
+
+    start_speaking_plan: InterruptionSettingsStartSpeakingPlan
+    """Controls when the assistant starts speaking after the user stops.
+
+    These thresholds primarily apply to non turn-taking transcription models. For
+    turn-taking models like `deepgram/flux`, end-of-turn detection is driven by the
+    transcription end-of-turn settings under `transcription.settings` instead.
+    """
+
+
+class McpServer(TypedDict, total=False):
+    """Reference to an MCP server attached to an assistant.
+
+    Create and manage MCP servers with the `/ai/mcp_servers` endpoints, then
+    attach them to assistants by ID.
+    """
+
+    id: Required[str]
+    """ID of the MCP server to attach.
+
+    This must be the `id` of an MCP server returned by the `/ai/mcp_servers`
+    endpoints.
+    """
+
+    allowed_tools: SequenceNotStr[str]
+    """Optional per-assistant allowlist of MCP tool names.
+
+    When omitted, the assistant uses the MCP server's configured `allowed_tools`.
+    """
+
+
 class PostConversationSettings(TypedDict, total=False):
     """Configuration for post-conversation processing.
 
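Putting the interruption classes together, a hypothetical `interruption_settings` payload for a non turn-taking transcription model might look like this (field names come from the classes above; the numeric values are illustrative, not recommended defaults):

```python
interruption_settings = {
    "enable": True,  # allow the user to barge in while the assistant speaks
    "start_speaking_plan": {
        "wait_seconds": 0.4,
        "transcription_endpointing_plan": {
            "on_punctuation_seconds": 0.3,     # transcript ends with punctuation
            "on_no_punctuation_seconds": 1.0,  # trailing speech, no punctuation
            "on_number_seconds": 0.6,          # transcript ends with a number
        },
    },
}
# For turn-taking models like deepgram/flux these thresholds are not used;
# configure the eot_* fields under transcription.settings instead.
```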