Commit 8b71f94

Authored by alliscode, Copilot, and moonbox3
Python: Feature/hosted dwf (#5531)
* Fix declarative Workflow.as_agent() by accepting list[Message] in start executor

  The declarative start executor (JoinExecutor) only advertised dict and str in its input_types, so WorkflowAgent.__init__ rejected it with "Workflow's start executor cannot handle list[Message]". Add list[Message] to the JoinExecutor handler annotation and add a matching branch in DeclarativeActionExecutor._ensure_state_initialized that extracts the last user-message text and falls through to the string-input initialization path, so =System.LastMessageText works end-to-end via as_agent().

* Populate Conversation.messages from list[Message] trigger

  When Workflow.as_agent() is invoked with a list[Message], the start executor now populates Conversation.messages / Conversation.history / System.conversations.{id}.messages with prior turns only (excluding the latest user message), and surfaces the latest user message via Inputs.input and System.LastMessage*. This matches InvokeAzureAgent's contract that the messages binding holds prior turns and the executor itself appends the new user input before invoking, avoiding double-append of the trailing user turn while preserving full history (incl. assistant/system/tool roles and multi-modal content) for downstream actions.

* Coerce Enum values when serializing PowerFx symbols

  MessageRole and other str-subclass Enums passed isinstance(v, str) and were forwarded to pythonnet unchanged. pythonnet then raised "MessageRole value cannot be converted to System.String" for every PowerFx primitive when ConditionGroup/Expr eval walked the symbol table containing Conversation.messages. Reduce Enum members to their underlying value before the primitive check so eval sees plain strings/ints.
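The Enum-coercion fix described in the last bullet above can be sketched as follows. This is a minimal illustration, not the framework's actual code: `make_powerfx_safe` and this `MessageRole` definition are stand-ins (the commit refers to a private `_make_powerfx_safe`).

```python
from enum import Enum


class MessageRole(str, Enum):
    # Hypothetical stand-in for the str-subclass Enum named in the commit.
    USER = "user"
    ASSISTANT = "assistant"


def make_powerfx_safe(v):
    # Reduce Enum members to their underlying value BEFORE the primitive
    # isinstance checks. A str-subclass Enum passes isinstance(v, str),
    # so without this branch it would be forwarded to the interop layer
    # as an Enum member instead of a plain string.
    if isinstance(v, Enum):
        v = v.value
    if v is None or isinstance(v, (str, int, float, bool)):
        return v
    if isinstance(v, dict):
        return {k: make_powerfx_safe(x) for k, x in v.items()}
    if isinstance(v, list):
        return [make_powerfx_safe(x) for x in v]
    return str(v)  # fallback for anything else


safe = make_powerfx_safe({"role": MessageRole.USER, "turn": 1})
# safe["role"] is now a plain str, not a MessageRole member
```

The key design point is ordering: the Enum check must run before the primitive checks, because `issubclass(MessageRole, str)` makes the `str` branch match first otherwise.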
* Foundry hosting: pass full conversation history to workflow agents

  _handle_inner_workflow only forwarded the latest user turn to WorkflowAgent.run, even though _handle_inner_agent already prepends history fetched from Foundry storage to the messages it sends to a regular agent. Declarative workflows reset Conversation.messages on every run (state.initialize), so checkpoint replay alone does not give them prior turns - the host has to pass them in, the same way it does for non-workflow agents. Mirror that contract: fetch context.get_history() and pass [*history, *input_messages] to the workflow agent.

* feat(workflows): support combined message + checkpoint_id for multi-turn continuation

  Allow Workflow.run(message=..., checkpoint_id=...) so callers can restore prior workflow state from a checkpoint AND deliver a new message to the start executor in a single call. The existing reset_context logic already preserves shared state when checkpoint_id is set, so this gives us "fresh start executor invocation with prior state intact" - exactly what hosted multi-turn declarative workflows need.
  - _workflow.py: drop the message+checkpoint_id mutual exclusion and update _execute_with_message_or_checkpoint to do both (restore then execute) when both are provided.
  - _agent.py: in _run_core's checkpoint branch, also forward input_messages so WorkflowAgent.run(messages, checkpoint_id=...) works end-to-end. Falls back to the legacy "restore only" behavior when messages are absent.
  - _declarative_base.py: detect continuation in _ensure_state_initialized by checking whether DECLARATIVE_STATE_KEY already exists in shared state; if so, refresh inputs/LastMessage* and append non-user trigger messages instead of calling state.initialize() (which would wipe Conversation/Local/System).
  - foundry_hosting/_responses.py: collapse the host's two-call pattern (restore-only, then fresh run) into a single combined call now that the underlying APIs support it.
  - tests: drop the assertion that combined message+checkpoint_id raises.

* Pivot: preserve workflow state across run() calls

  Replace the prior "combined message + checkpoint_id in one run()" approach with a cleaner default: Workflow.run no longer wipes shared state or runner-context messages between calls. Iteration counting and per-run kwargs still reset on a fresh-message run; checkpoint and responses runs are continuations that preserve everything. This lets a WorkflowAgent be invoked repeatedly on the same instance and maintain multi-turn context (e.g. accumulated Conversation.messages) without asking developers to opt in. The hosted-agent multi-turn pattern becomes two explicit calls: restore-from-checkpoint (drive to idle), then run-with-message.

  Key changes:
  - _workflow.py: drop _state.clear() and reset_for_new_run() from run(). Reset iteration count and run kwargs on fresh-message runs only. Restore the "Cannot provide both message and checkpoint_id" validation. Add an async guard: a fresh-message run with un-drained pending executor messages from a prior run is invalid.
  - _runner.py: clear _state before import_state in restore_from_checkpoint so restore is authoritative (import_state merges, not replaces).
  - _agent.py: revert the checkpoint branch to restore-only (no message forward).
  - _responses.py (foundry_hosting): two-call host pattern - restore the checkpoint silently, then run with the new user input.
  - tests: state preservation is the new default; rebuild the Workflow for a clean slate.

* Fix CI lint and mypy issues from prior pivot commit
  - _workflow.py: collapse nested if (SIM102), drop redundant assignment (RET504).
  - _declarative_base.py: remove the unused last_user_msg = tail assignment whose Message | None type clashed with the prior Message-typed branch.

* Address PR review: fix Inputs.input update and checkpoint storage path
  - _declarative_base.py: the continuation branch was writing "Inputs.input" via state.set, which routes to the Custom namespace and never updates the PowerFx-visible Workflow.Inputs.input. Update state_data['Inputs'] in place via get_state_data / set_state_data so =Workflow.Inputs.input and =inputs.input see the new turn's user text on continuation.
  - _declarative_base.py: refresh the docstring to clarify that on a list[Message] trigger, Conversation.messages excludes the current user message at the start of the turn (agent executors append it before invoking the inner agent).
  - _responses.py: when previous_response_id is supplied (no conversation_id), the prior checkpoint lives under <storage>/<previous_response_id> but new checkpoints must land under <storage>/<current_response_id> for the next turn to find them. Hold onto restore_storage from the get_latest lookup and pass it to the restore-only run; pass write_storage (current id) to the message-delivery run and to checkpoint cleanup.

* Fix pyright errors in _declarative_base.py for CI
  - Replace state._state.get(...) protected access with a new public is_initialized() method on DeclarativeWorkflowState (also clearer intent for the continuation-detection use case).
  - Add narrow pyright ignores for the Any-typed trigger paths that pyright cannot fully narrow (the list[Message] isinstance loop and the fallback-DefaultTransform branch).

* Address Copilot review batch: tests + Workflow.reset escape hatch
  - Add a Workflow.reset() public method as a recovery escape hatch when an in-flight run aborted (e.g. WorkflowConvergenceException) and the workflow is not checkpointed. Update the in-flight messages guard's error message to point callers at it.
  - Add test_workflow_run_inflight_messages_guard exercising both the guard (sync + streaming) and the reset() recovery path.
  - Add test_workflow_reset_rejects_concurrent_runs to lock down the in-progress guard on reset.
  - Add test_as_agent_continuation_preserves_prior_state covering the is_continuation branch in _ensure_state_initialized: stamps a marker between calls and asserts it survives, while Inputs.input and System.LastMessageText refresh to the new turn.
  - Add test_powerfx_safe.py regression tests for the Enum branch in _make_powerfx_safe (str-subclass, int-subclass, plain Enum, and Enums nested in dict/list).
  - Drop the redundant @pytest.mark.asyncio on test_as_agent_round_trip_with_last_message_text (asyncio_mode='auto').

* Skip restore-only pre-pass when checkpoint has pending request_info

  Address Copilot review on _responses.py: the restore-only checkpoint replay populates self._agent.pending_requests for any request_info events captured in the checkpoint. The follow-up run(input_messages) call would then route through WorkflowAgent._process_pending_requests, which expects function-response content and rejects plain text input as "unexpected content while awaiting request info responses". Workflows resumed from a checkpoint that was idle-with-pending-requests would therefore fail every subsequent plain-text user turn. Inspect the loaded checkpoint and skip the pre-pass when its pending_request_info_events dict is non-empty. Workflows that don't use request_info (the current sample set) are unaffected; workflows that do will fall through to a fresh-message run rather than silently corrupting the routing state.

* Loosen azure-ai-agentserver-* pins to major version

  The exact-version pins on azure-ai-agentserver-{core,responses,invocations} forced foundry-hosting consumers to upgrade in lockstep with every beta bump from upstream. Switch to '>=current,<next-major' so we pick up patch and feature updates within the same major series without a coordinated release.

* Drop Workflow.reset(); checkpointing is the recovery path

  The in-flight-messages guard prevented silent misbehavior, but the companion Workflow.reset() escape hatch only cleared _messages while leaving the iteration count, executor-local state, and shared State mutations in an indeterminate condition after a mid-run failure. That gave a false sense of recovery. Recovery from a mid-run failure is supported only via checkpoint restoration. Keep the guard and reframe its error message accordingly; remove reset() and its tests.

* Address Tao's review on PR 5531
  - Rename the Workflow._run_workflow_with_tracing parameter is_fresh_message_run -> is_continuation (default False, inverted). Fresh-message turns reset per-run accounting; continuations (checkpoint restores, responses replays) preserve it.
  - Simplify the in-flight-messages guard: _validate_run_params already enforces that 'message' is mutually exclusive with 'checkpoint_id' and 'responses', so the additional checks were dead code.
  - foundry_hosting _responses: move the restore-only pre-pass above emit_created/emit_in_progress; restore is preparation, not run progress. Drop the skip-restore gate (state preservation requires unconditional restore) and instead clear agent.pending_requests after the restore-only call. Collapse the over-conditioned check.

* Don't clear pending_requests after restore-only pre-pass

  Pending requests in the restored checkpoint represent genuinely outstanding HITL requests. The next user input may carry function responses (Responses API `function_call_output` items become FunctionResultContent / FunctionApprovalResponseContent), which `WorkflowAgent._process_pending_requests` correctly extracts and matches against the populated `pending_requests`. Clearing them after restore would silently drop that state and force the next turn to be treated as a fresh input even when the caller is responding to the outstanding requests.

---------

Co-authored-by: alliscode <bentho@microsoft.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Evan Mattson <35585003+moonbox3@users.noreply.github.com>
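The two-call hosted multi-turn pattern the message converges on (restore to idle, then deliver the new message) can be sketched with a toy agent. `FakeWorkflowAgent` and `run_turn` are illustrative stand-ins, not the framework's actual API; the real `WorkflowAgent.run` has a different signature.

```python
import asyncio


class FakeWorkflowAgent:
    """Toy stand-in for WorkflowAgent; the real signatures differ."""

    def __init__(self) -> None:
        self.calls: list[str] = []  # shared state that survives across run() calls

    async def run(self, messages=None, *, checkpoint_id=None):
        if checkpoint_id is not None:
            # Restore-only call: replay prior state, deliver no new message.
            self.calls.append(f"restored:{checkpoint_id}")
            return None
        # Fresh-message call: prior state is preserved, new turn delivered.
        self.calls.append(f"ran:{messages}")
        return self.calls[-1]


async def run_turn(agent, input_messages, checkpoint_id=None):
    # Call 1: restore the prior workflow state and drive it to idle.
    if checkpoint_id is not None:
        await agent.run(checkpoint_id=checkpoint_id)
    # Call 2: deliver the new user input. message and checkpoint_id stay
    # mutually exclusive per call, but state survives between the calls.
    return await agent.run(input_messages)


agent = FakeWorkflowAgent()
result = asyncio.run(run_turn(agent, "hello", checkpoint_id="cp1"))
```

The point of the pivot is that the second call finds the first call's state intact, so the host never needs a combined message+checkpoint call.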
1 parent 866a325 commit 8b71f94

11 files changed

Lines changed: 512 additions & 85 deletions


python/packages/core/agent_framework/_workflows/_agent.py

Lines changed: 7 additions & 0 deletions
@@ -437,6 +437,13 @@ async def _run_core(
                 yield event

         elif checkpoint_id is not None:
+            # Restore the prior workflow state from the checkpoint. Shared
+            # state (e.g. accumulated conversation history maintained by the
+            # workflow's executors) survives across turns because Workflow.run
+            # no longer wipes state per call. Callers who want to deliver a
+            # new user message after restore should make a second
+            # `workflow.run(message=...)` call - they are NOT mutually
+            # exclusive on the same instance, but each must be its own call.
             if streaming:
                 async for event in self.workflow.run(
                     stream=True,

python/packages/core/agent_framework/_workflows/_runner.py

Lines changed: 6 additions & 1 deletion
@@ -278,7 +278,12 @@ async def restore_from_checkpoint(
                 "Please rebuild the original workflow before resuming."
             )

-        # Restore state
+        # Restore state. Clear first so import_state (which merges) does
+        # not leak stale keys from a prior run on this Workflow instance.
+        # This matters more now that Workflow.run() no longer wipes state
+        # per call - the only reset point for shared state on a reused
+        # instance is at restore time.
+        self._state.clear()
         self._state.import_state(checkpoint.state)
         # Restore executor states using the restored state
         await self._restore_executor_states()
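The merge-vs-replace distinction this hunk guards against can be seen with a toy state object. This is a sketch under assumed semantics; `State` here is not the framework's actual class, only an illustration of why `clear()` must precede a merging `import_state()`.

```python
class State:
    """Toy stand-in for the runner's shared-state object."""

    def __init__(self) -> None:
        self._data: dict[str, object] = {}

    def import_state(self, snapshot: dict) -> None:
        self._data.update(snapshot)  # merges into existing state - does NOT replace it

    def clear(self) -> None:
        self._data.clear()


# Without clear(): a stale key from the prior run leaks into the restore.
leaky = State()
leaky.import_state({"turn": 1, "leftover": "stale"})
leaky.import_state({"turn": 2})  # merge keeps "leftover"

# With clear() first, the checkpoint snapshot is authoritative.
restored = State()
restored.import_state({"turn": 1, "leftover": "stale"})
restored.clear()
restored.import_state({"turn": 2})
```

Because `Workflow.run()` no longer wipes state between calls, restore time is the only point where a reused instance sheds keys the checkpoint does not contain.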

python/packages/core/agent_framework/_workflows/_workflow.py

Lines changed: 58 additions & 27 deletions
@@ -299,7 +299,7 @@ def get_executors_list(self) -> list[Executor]:
     async def _run_workflow_with_tracing(
         self,
         initial_executor_fn: Callable[[], Awaitable[None]] | None = None,
-        reset_context: bool = True,
+        is_continuation: bool = False,
         streaming: bool = False,
         function_invocation_kwargs: Mapping[str, Mapping[str, Any]] | Mapping[str, Any] | None = None,
         client_kwargs: Mapping[str, Mapping[str, Any]] | Mapping[str, Any] | None = None,
@@ -310,13 +310,19 @@ async def _run_workflow_with_tracing(
         of external callers to maintain context across different workflow runs.

         Args:
-            initial_executor_fn: Optional function to execute initial executor
-            reset_context: Whether to reset the context for a new run
-            streaming: Whether to enable streaming mode for agents
+            initial_executor_fn: Optional function to execute initial executor.
+            is_continuation: True when this run is a continuation of prior
+                work (a checkpoint restore or a responses-only replay) rather
+                than a fresh new turn delivered via the start executor with
+                ``message=...``. Continuations preserve per-run accounting
+                (iteration counter and run kwargs) from the prior turn;
+                fresh-message runs reset them. Shared workflow state is
+                preserved in both cases.
+            streaming: Whether to enable streaming mode for agents.
             function_invocation_kwargs: Optional kwargs to store in State for function
-                invocations in subagents
+                invocations in subagents.
             client_kwargs: Optional kwargs to store in State for chat client
-                invocations in subagents
+                invocations in subagents.

         Yields:
             WorkflowEvent: The events generated during the workflow execution.
@@ -345,16 +351,26 @@ async def _run_workflow_with_tracing(
         in_progress = WorkflowEvent.status(WorkflowRunState.IN_PROGRESS)
         yield in_progress  # noqa: RUF070

-        # Reset context for a new run if supported
-        if reset_context:
+        # Per-run reset for fresh-message runs only. We deliberately
+        # do NOT clear shared workflow state (`_state.clear()`) or the
+        # runner context's in-flight messages (`reset_for_new_run()`)
+        # here - state and pending work persist across `run()` calls
+        # so that a `WorkflowAgent` can deliver multi-turn input on
+        # the same instance and have prior turns' context survive.
+        # Iteration counting and per-run kwargs ARE per-run though,
+        # so they're reset here.
+        if not is_continuation:
             self._runner.reset_iteration_count()
-            self._runner.context.reset_for_new_run()
-            self._state.clear()

         # Store run kwargs in State so executors can access them.
-        # Only overwrite when new kwargs are explicitly provided or state was
-        # just cleared (fresh run). On continuation (reset_context=False) with
-        # no new kwargs, preserve the kwargs from the original run.
+        # Per-run kwargs semantics:
+        # - On a fresh message run, prior kwargs go away (set to {}
+        #   by default, or to the new kwargs if provided). This
+        #   prevents stale kwargs from a prior turn leaking into the
+        #   current turn.
+        # - On a continuation (checkpoint restore or responses), the
+        #   prior run's kwargs are preserved unless the caller
+        #   explicitly provides new kwargs.
         if function_invocation_kwargs is not None or client_kwargs is not None:
             combined_kwargs: dict[str, Any] = {}
             if function_invocation_kwargs is not None:
@@ -366,11 +382,12 @@ async def _run_workflow_with_tracing(
                 client_kwargs, "client_kwargs"
             )
             self._state.set(WORKFLOW_RUN_KWARGS_KEY, combined_kwargs)
-        elif reset_context:
+        elif not is_continuation:
             self._state.set(WORKFLOW_RUN_KWARGS_KEY, {})
         self._state.commit()  # Commit immediately so kwargs are available

-        # Set streaming mode after reset
+        # Set streaming mode (always set explicitly per run since
+        # reset_for_new_run() no longer runs to clear it).
         self._runner_context.set_streaming(streaming)

         # Execute initial setup if provided
@@ -585,13 +602,33 @@ async def _run_core(
         if checkpoint_storage is not None:
             self._runner.context.set_runtime_checkpoint_storage(checkpoint_storage)

-        initial_executor_fn, reset_context = self._resolve_execution_mode(
+        # Async validation: a fresh-message run is only allowed when the
+        # runner context has fully drained from any prior run. If it still
+        # has in-flight executor messages, the prior run didn't complete -
+        # the caller must either resume from a checkpoint or wait for the
+        # prior run to drain. (Pending request_info events are intentionally
+        # NOT blocked here: a follow-up run with message=... is the normal
+        # way to deliver a response to those pending requests, e.g. via
+        # WorkflowAgent._process_pending_requests.)
+        # NOTE: _validate_run_params already enforces that ``message`` is
+        # mutually exclusive with both ``checkpoint_id`` and ``responses``,
+        # so we don't need to re-check those here.
+        if message is not None and await self._runner.context.has_messages():
+            raise RuntimeError(
+                "Cannot start a new run with 'message' while in-flight executor "
+                "messages remain from a prior run. Resume from a checkpoint "
+                "(checkpoint_id=...) or wait for the prior run to complete. "
+                "Workflows that need to recover from a mid-run failure must use "
+                "checkpointing; there is no in-process recovery path."
+            )
+
+        initial_executor_fn = self._resolve_execution_mode(
             message, responses, checkpoint_id, checkpoint_storage
         )

         async for event in self._run_workflow_with_tracing(
             initial_executor_fn=initial_executor_fn,
-            reset_context=reset_context,
+            is_continuation=(message is None),
             streaming=streaming,
             function_invocation_kwargs=function_invocation_kwargs,
             client_kwargs=client_kwargs,
@@ -674,12 +711,8 @@ def _resolve_execution_mode(
         responses: Mapping[str, Any] | None,
         checkpoint_id: str | None,
         checkpoint_storage: CheckpointStorage | None,
-    ) -> tuple[Callable[[], Awaitable[None]], bool]:
-        """Determine the initial executor function and reset_context flag based on parameters.
-
-        Returns:
-            A tuple of (initial_executor_fn, reset_context).
-        """
+    ) -> Callable[[], Awaitable[None]]:
+        """Determine the initial executor function based on parameters."""
         if responses is not None:
             if checkpoint_id is not None:
                 # Combined: restore checkpoint then send responses
@@ -689,13 +722,11 @@ def _resolve_execution_mode(
             else:
                 # Send responses only (requires pending requests in workflow state)
                 initial_executor_fn = functools.partial(self._send_responses_internal, responses)
-            return initial_executor_fn, False
+            return initial_executor_fn
         # Regular run or checkpoint restoration
-        initial_executor_fn = functools.partial(
+        return functools.partial(
             self._execute_with_message_or_checkpoint, message, checkpoint_id, checkpoint_storage
         )
-        reset_context = message is not None and checkpoint_id is None
-        return initial_executor_fn, reset_context

     async def _restore_and_send_responses(
         self,

python/packages/core/tests/workflow/test_workflow.py

Lines changed: 64 additions & 16 deletions
@@ -488,8 +488,13 @@ async def handle_message(
         await ctx.yield_output(existing_messages.copy())  # type: ignore


-async def test_workflow_multiple_runs_no_state_collision():
-    """Test that running the same workflow instance multiple times doesn't have state collision."""
+async def test_workflow_multiple_runs_preserve_state():
+    """Test that running the same workflow instance multiple times preserves shared state.
+
+    State preservation is the new default - calling ``Workflow.run`` repeatedly
+    on the same instance behaves like a chat agent maintaining memory across
+    turns. Callers that want fresh state should rebuild the Workflow.
+    """
     with tempfile.TemporaryDirectory() as temp_dir:
         storage = FileCheckpointStorage(temp_dir)

@@ -503,29 +508,45 @@ async def test_workflow_multiple_runs_no_state_collision():
             .build()
         )

-        # Run 1: Should only see messages from run 1
+        # Run 1: Single record from run 1
         result1 = await workflow.run(StateTrackingMessage(data="message1", run_id="run1"))
         assert result1.get_final_state() == WorkflowRunState.IDLE
         outputs1 = result1.get_outputs()
         assert outputs1[0] == ["run1:message1"]

-        # Run 2: Should only see messages from run 2, not run 1
+        # Run 2: State from run 1 persists; run 2's record appends.
         result2 = await workflow.run(StateTrackingMessage(data="message2", run_id="run2"))
         assert result2.get_final_state() == WorkflowRunState.IDLE
         outputs2 = result2.get_outputs()
-        assert outputs2[0] == ["run2:message2"]  # Should NOT contain run1 data
+        assert outputs2[0] == ["run1:message1", "run2:message2"]

-        # Run 3: Should only see messages from run 3
+        # Run 3: Same - all three accumulate.
         result3 = await workflow.run(StateTrackingMessage(data="message3", run_id="run3"))
         assert result3.get_final_state() == WorkflowRunState.IDLE
         outputs3 = result3.get_outputs()
-        assert outputs3[0] == ["run3:message3"]  # Should NOT contain run1 or run2 data
+        assert outputs3[0] == ["run1:message1", "run2:message2", "run3:message3"]
+
+
+async def test_workflow_multiple_runs_no_state_collision_after_rebuild():
+    """Rebuilding the Workflow gives a fresh shared-state slate."""
+    with tempfile.TemporaryDirectory() as temp_dir:
+        storage = FileCheckpointStorage(temp_dir)
+
+        def _build():
+            executor = StateTrackingExecutor(id="state_executor")
+            return (
+                WorkflowBuilder(start_executor=executor, checkpoint_storage=storage)
+                .add_edge(executor, executor)
+                .build()
+            )

-        # Verify that each run only processed its own message
-        # This confirms that the checkpointable context properly resets between runs
-        assert outputs1[0] != outputs2[0]
-        assert outputs2[0] != outputs3[0]
-        assert outputs1[0] != outputs3[0]
+        wf1 = _build()
+        result1 = await wf1.run(StateTrackingMessage(data="message1", run_id="run1"))
+        assert result1.get_outputs()[0] == ["run1:message1"]
+
+        wf2 = _build()
+        result2 = await wf2.run(StateTrackingMessage(data="message2", run_id="run2"))
+        assert result2.get_outputs()[0] == ["run2:message2"]


 async def test_workflow_checkpoint_runtime_only_configuration(
@@ -932,6 +953,31 @@ async def test_agent_streaming_vs_non_streaming() -> None:
     assert accumulated_text == "Hello World", f"Expected 'Hello World', got '{accumulated_text}'"


+async def test_workflow_run_inflight_messages_guard(simple_executor: Executor) -> None:
+    """``run(message=...)`` must reject in-flight executor messages from a prior run.
+
+    Workflows preserve state and pending messages across :meth:`Workflow.run`
+    calls. If a prior run aborted before the runner drained those pending
+    messages (e.g. it raised :class:`WorkflowConvergenceException`), the next
+    fresh-message call should fail loudly instead of silently mixing the
+    leftover messages with the new turn. The supported recovery path is to
+    resume from a checkpoint; there is no in-process recovery hatch.
+    """
+    workflow = WorkflowBuilder(start_executor=simple_executor).add_edge(simple_executor, simple_executor).build()
+    test_message = WorkflowMessage(data="test", source_id="test", target_id=None)
+
+    # Simulate an aborted prior run by leaving a message in the runner context.
+    workflow._runner.context._messages["test"] = [test_message]
+    assert await workflow._runner.context.has_messages()
+
+    with pytest.raises(RuntimeError, match="in-flight executor messages"):
+        await workflow.run(test_message)
+
+    with pytest.raises(RuntimeError, match="in-flight executor messages"):
+        async for _ in workflow.run(test_message, stream=True):
+            pass
+
+
 async def test_workflow_run_parameter_validation(simple_executor: Executor) -> None:
     """Test that stream properly validate parameter combinations."""
     workflow = WorkflowBuilder(start_executor=simple_executor).add_edge(simple_executor, simple_executor).build()
@@ -942,13 +988,15 @@ async def test_workflow_run_parameter_validation(simple_executor: Executor) -> N
     result = await workflow.run(test_message)
     assert result.get_final_state() == WorkflowRunState.IDLE

-    # Invalid: both message and checkpoint_id
+    # Invalid: message + checkpoint_id (mutually exclusive). Multi-turn
+    # state preservation is handled by Workflow.run preserving state across
+    # calls, so the host pattern is two separate calls (restore-then-run),
+    # not a single combined call.
     with pytest.raises(ValueError, match="Cannot provide both 'message' and 'checkpoint_id'"):
-        await workflow.run(test_message, checkpoint_id="fake_id")
+        await workflow.run(test_message, checkpoint_id="some-checkpoint")

-    # Invalid: both message and checkpoint_id (streaming)
     with pytest.raises(ValueError, match="Cannot provide both 'message' and 'checkpoint_id'"):
-        async for _ in workflow.run(test_message, checkpoint_id="fake_id", stream=True):
+        async for _ in workflow.run(test_message, checkpoint_id="some-checkpoint", stream=True):
             pass

     # Invalid: none of message or checkpoint_id