This sample demonstrates the monitoring pattern with the Azure Durable Task Scheduler using the Python SDK. The monitoring pattern is used for periodically checking the status of a long-running operation until it completes or times out.
In this sample:
- The orchestrator starts monitoring a job with a specified ID
- It periodically calls the `check_job_status` activity at defined intervals
- The current job status is exposed via custom status, making it available to clients
- Monitoring continues until either:
- The job completes successfully
- The specified timeout period is reached
This pattern is useful for:
- Polling external services or APIs that don't support callbacks
- Checking the status of long-running operations
- Implementing timeout mechanisms for operations with unpredictable durations
- Maintaining state about progress of a workflow
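The control flow described above can be sketched as a plain polling loop. This is an illustrative simulation, not the sample's orchestrator code: the real orchestrator uses a durable timer between checks (so waits survive process restarts) rather than `time.sleep`, and the three-check job here is a stand-in for a real status source.

```python
import time

def check_job_status(job_id: str, check_count: int) -> str:
    # Stand-in for the sample's check_job_status activity: a real
    # implementation would query an external service. Here the job
    # "completes" after three checks, mirroring the sample's behavior.
    return "Completed" if check_count >= 3 else "Running"

def monitor_job(job_id: str, polling_interval: float = 5, timeout: float = 30) -> dict:
    # Core monitoring loop: poll until the job completes or the
    # timeout elapses, pausing between checks.
    deadline = time.monotonic() + timeout
    check_count = 0
    status = "Unknown"
    while time.monotonic() < deadline:
        check_count += 1
        status = check_job_status(job_id, check_count)
        if status == "Completed":
            break
        time.sleep(polling_interval)
    return {"job_id": job_id, "status": status, "checks": check_count}

result = monitor_job("job-123", polling_interval=0.01, timeout=1)
```

In the durable version, replacing `time.sleep` with a durable timer is what makes the pattern safe for long-running jobs: the orchestrator is unloaded while waiting and replays deterministically when the timer fires.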
Prerequisites:
- Python 3.9+
- Docker installed (for running the emulator)
- Azure CLI (if using a deployed Durable Task Scheduler)
There are two ways to run this sample locally:
The emulator simulates a scheduler and task hub in a Docker container, making it ideal for development and learning.
- Pull the Docker image for the emulator:

  ```bash
  docker pull mcr.microsoft.com/dts/dts-emulator:latest
  ```

- Run the emulator:

  ```bash
  docker run --name dtsemulator -d -p 8080:8080 -p 8082:8082 mcr.microsoft.com/dts/dts-emulator:latest
  ```

  Wait a few seconds for the container to be ready.
Note: The example code automatically uses the default emulator settings (endpoint: http://localhost:8080, taskhub: default). You don't need to set any environment variables.
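A minimal sketch of that fallback behavior (this is illustrative, not the sample's exact code; in practice you would pass `dict(os.environ)` instead of an empty dict):

```python
def scheduler_settings(env: dict) -> tuple[str, str]:
    # Fall back to the emulator defaults described above when the
    # ENDPOINT / TASKHUB environment variables are not set.
    endpoint = env.get("ENDPOINT", "http://localhost:8080")
    taskhub = env.get("TASKHUB", "default")
    return endpoint, taskhub

endpoint, taskhub = scheduler_settings({})  # no variables set, so emulator defaults apply
```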
Local development with a deployed scheduler:
- Install the Durable Task Scheduler CLI extension:

  ```bash
  az upgrade
  az extension add --name durabletask --allow-preview true
  ```

- Create a resource group in a region where the Durable Task Scheduler is available:

  ```bash
  az provider show --namespace Microsoft.DurableTask --query "resourceTypes[?resourceType=='schedulers'].locations | [0]" --out table
  az group create --name my-resource-group --location <location>
  ```

- Create a Durable Task Scheduler resource:

  ```bash
  az durabletask scheduler create \
    --resource-group my-resource-group \
    --name my-scheduler \
    --ip-allowlist '["0.0.0.0/0"]' \
    --sku-name "Dedicated" \
    --sku-capacity 1 \
    --tags "{'myattribute':'myvalue'}"
  ```

- Create a task hub within the scheduler resource:

  ```bash
  az durabletask taskhub create \
    --resource-group my-resource-group \
    --scheduler-name my-scheduler \
    --name "my-taskhub"
  ```

- Grant the current user permission to connect to the `my-taskhub` task hub:

  ```bash
  subscriptionId=$(az account show --query "id" -o tsv)
  loggedInUser=$(az account show --query "user.name" -o tsv)

  az role assignment create \
    --assignee $loggedInUser \
    --role "Durable Task Data Contributor" \
    --scope "/subscriptions/$subscriptionId/resourceGroups/my-resource-group/providers/Microsoft.DurableTask/schedulers/my-scheduler/taskHubs/my-taskhub"
  ```
Once you have set up either the emulator or deployed scheduler, follow these steps to run the sample:
- First, activate your Python virtual environment (if you're using one):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use: venv\Scripts\activate
  ```

- If you're using a deployed scheduler, set the environment variables:

  ```bash
  export ENDPOINT=$(az durabletask scheduler show \
    --resource-group my-resource-group \
    --name my-scheduler \
    --query "properties.endpoint" \
    --output tsv)

  export TASKHUB="my-taskhub"
  ```

- Install the required packages:

  ```bash
  pip install -r requirements.txt
  ```

- Start the worker in a terminal:

  ```bash
  python worker.py
  ```

  You should see output indicating the worker has started and registered the orchestration and activities.

- In a new terminal (with the virtual environment activated if applicable), run the client:

  Note: Remember to set the environment variables again if you're using a deployed scheduler.

  ```bash
  python client.py [job_id] [polling_interval] [timeout]
  ```

  You can optionally provide:

  - `job_id`: A unique identifier for the job (defaults to a random UUID)
  - `polling_interval`: How often to check the job status, in seconds (defaults to 5)
  - `timeout`: Maximum monitoring duration in seconds (defaults to 30)
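One plausible way the client could apply those defaults is sketched below. This is an assumption for illustration, not the sample's actual parsing code:

```python
import sys
import uuid

def parse_args(argv: list[str]) -> tuple[str, int, int]:
    # Positional arguments with the documented defaults:
    # job_id (random UUID), polling_interval (5s), timeout (30s).
    job_id = argv[1] if len(argv) > 1 else str(uuid.uuid4())
    polling_interval = int(argv[2]) if len(argv) > 2 else 5
    timeout = int(argv[3]) if len(argv) > 3 else 30
    return job_id, polling_interval, timeout

# Only job_id and polling_interval given; timeout falls back to 30.
job_id, interval, timeout = parse_args(["client.py", "job-42", "2"])
```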
Learn how to set up identity-based authentication when you deploy the app to Azure.
When you run the sample, you'll see output from both the worker and client processes:
The worker shows:
- Registration of the orchestrator and activities
- Starting the monitoring orchestration with the specified parameters
- Periodic log entries when the `check_job_status` activity is called
- Status updates as the check count increases
- A message when monitoring completes or times out
The client shows:
- Starting the monitoring orchestration for the specified job
- Real-time status updates as they occur (via custom status)
- Status changes from "Unknown" to "Running" and finally to "Completed"
- The final result, including:
- Job ID
- Final status
- Number of status checks performed
- Total monitoring duration
The sample demonstrates a job that completes after 3 status checks, but in a real application, the `check_job_status` activity would typically call an external service to determine the actual status.
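Such a production-style check might look like the sketch below. The endpoint URL and response shape are assumptions for illustration; the injectable `fetch` parameter lets the mapping logic be exercised without a live service:

```python
import json
from urllib.request import urlopen

def check_job_status(job_id: str, fetch=None) -> str:
    # Query a (hypothetical) REST endpoint for the job's state and map
    # the response to the statuses the orchestrator expects.
    if fetch is None:
        def fetch(url):
            with urlopen(url) as resp:  # real HTTP call in production
                return resp.read().decode()
    body = fetch(f"https://example.invalid/jobs/{job_id}/status")
    state = json.loads(body).get("state", "Unknown")
    return {"succeeded": "Completed", "running": "Running"}.get(state, "Unknown")

# Exercise the mapping with a canned response instead of a live service.
status = check_job_status("job-123", fetch=lambda url: '{"state": "succeeded"}')
```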
To access the Durable Task Scheduler Dashboard and review your orchestration:
- If you're using the emulator, navigate to http://localhost:8082 in your web browser
- Click on the "default" task hub
- You'll see the orchestration instance in the list
- Click on the instance ID to view the execution details, which will show:
  - The periodic calls to the `check_job_status` activity
  - The timers created between checks
  - The custom status updates at each step
  - The final result when monitoring completes
- If you're using a deployed scheduler, navigate to the Scheduler resource in the Azure portal
- Go to the Task Hub subresource that you're using
- Click on the dashboard URL in the top right corner
- Search for your orchestration instance ID
- Review the execution details
The dashboard visualizes the polling pattern, showing how the orchestrator alternates between checking status and waiting, and how it uses timers to implement the polling interval.