This sample shows how Durable Functions can safely process orchestration data that is larger than 1 MB when Durable Task Scheduler is configured with large payload storage.
If you want the same storage feature demonstrated with a parallel fan-out/fan-in orchestration, see the sibling LargePayloadFanOutFanIn sample. Both samples use the same DTS + blob storage configuration and deployment story; this folder is the simplest place to start.
The flow is intentionally simple:
- An HTTP trigger starts an orchestration with a payload larger than 1 MB.
- The orchestrator sends that payload to a single activity.
- The activity echoes the payload back.
- The orchestration returns a small summary proving the payload survived the round-trip.
This is exactly why large payload support exists: without blob offload, a payload this size would be too large to flow through DTS messages directly.
The sample enables these settings in `host.json`:

```json
"durableTask": {
  "storageProvider": {
    "type": "azureManaged",
    "connectionStringName": "DTS_CONNECTION_STRING",
    "largePayloadStorageEnabled": true,
    "largePayloadStorageThresholdBytes": 900000
  },
  "hubName": "%TASKHUB_NAME%"
}
```

When a payload exceeds `largePayloadStorageThresholdBytes`, the Durable Functions extension:
- compresses the payload with gzip
- stores it in blob storage using `AzureWebJobsStorage`
- replaces the in-band DTS message with a small blob reference
- resolves that blob reference automatically before your function code reads the payload
The sample uses a deterministic, low-compressibility 1.5 MiB payload by default and a 900,000-byte threshold so externalization happens before the payload approaches the DTS 1 MiB message boundary.
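The offload behavior described above can be modeled with a short sketch. This is illustrative only: `externalize`, `resolve`, and the in-memory `blob_store` are stand-ins for the extension's internal logic, not its real API.

```python
import gzip
import uuid

THRESHOLD = 900_000  # mirrors largePayloadStorageThresholdBytes

blob_store = {}  # stand-in for the AzureWebJobsStorage blob container


def externalize(payload: bytes) -> dict:
    """Model of the extension's offload decision (not the real API)."""
    if len(payload) <= THRESHOLD:
        return {"inline": payload}  # small payloads flow through DTS directly
    blob_name = f"durabletask-payloads/{uuid.uuid4()}"
    blob_store[blob_name] = gzip.compress(payload)  # gzip, then store the blob
    return {"blobRef": blob_name}  # only a small reference crosses DTS


def resolve(message: dict) -> bytes:
    """Model of the transparent rehydration before user code runs."""
    if "inline" in message:
        return message["inline"]
    return gzip.decompress(blob_store[message["blobRef"]])
```

The key property is that orchestrator and activity code never see the reference: payloads above the threshold are externalized on the way out and resolved on the way in.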
Prerequisites:

- .NET 8 SDK
- Azure Functions Core Tools v4
- Docker
- Azure CLI
- Azure Developer CLI (`azd`) for the Azure deployment path
To run locally:

- Start the local dependencies:

  ```bash
  docker compose up -d
  ```

- Create `local.settings.json`:

  ```json
  {
    "IsEncrypted": false,
    "Values": {
      "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
      "AzureWebJobsStorage": "UseDevelopmentStorage=true",
      "DTS_CONNECTION_STRING": "Endpoint=http://localhost:8080;Authentication=None",
      "TASKHUB_NAME": "default",
      "PAYLOAD_SIZE_BYTES": "1572864"
    }
  }
  ```

- Start the function app:

  ```bash
  func start
  ```

- Trigger the orchestration:

  ```bash
  curl -X POST http://localhost:7071/api/StartLargePayload
  ```

- Poll the `StatusQueryGetUri` value from the response until the orchestration completes. The full status payload also includes the original large input, so focus on the `runtimeStatus` and `output` fields. The important part looks like this:

  ```json
  {
    "runtimeStatus": "Completed",
    "output": {
      "RequestedPayloadBytes": 1572864,
      "OrchestrationInputBytes": 1572864,
      "ActivityOutputBytes": 1572864,
      "ExceededOneMiB": true,
      "PayloadsMatch": true
    }
  }
  ```

- Verify that blob offload happened:

  ```bash
  az storage blob list \
    --connection-string "UseDevelopmentStorage=true" \
    --container-name durabletask-payloads \
    --output table
  ```
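If you script the polling step, the completion check reduces to a small helper. This is a sketch: `round_trip_ok` is a hypothetical name, and it assumes the status JSON has the shape shown above.

```python
import json


def round_trip_ok(status_json: str) -> bool:
    """Return True when the status payload shows a successful round trip."""
    status = json.loads(status_json)
    if status.get("runtimeStatus") != "Completed":
        return False  # still Pending/Running, or Failed
    output = status.get("output") or {}
    return bool(output.get("PayloadsMatch")) and bool(output.get("ExceededOneMiB"))


sample = """
{
  "runtimeStatus": "Completed",
  "output": {
    "RequestedPayloadBytes": 1572864,
    "OrchestrationInputBytes": 1572864,
    "ActivityOutputBytes": 1572864,
    "ExceededOneMiB": true,
    "PayloadsMatch": true
  }
}
"""
print(round_trip_ok(sample))  # prints True for the sample status above
```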
The extension stores payload blobs with gzip content encoding, so Azure shows the compressed on-disk size. Because this sample uses low-compressibility payload content, the stored blob sizes should stay reasonably close to the logical 1.5 MiB payload size instead of collapsing to a few kilobytes the way a repetitive text payload would.
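The compressibility claim is easy to check with a quick sketch (the 1.5 MiB size matches the sample's default; the seed and the repetitive pattern are arbitrary):

```python
import gzip
import random

SIZE = 1_572_864  # 1.5 MiB, the sample's default PAYLOAD_SIZE_BYTES

# Low-compressibility payload: deterministic pseudo-random bytes.
noisy = random.Random(42).randbytes(SIZE)
# Highly repetitive payload, for contrast.
repetitive = b"durable-" * (SIZE // 8)

noisy_gz = len(gzip.compress(noisy))
repetitive_gz = len(gzip.compress(repetitive))

# Random bytes barely compress; repetitive text collapses to a few kilobytes.
print(noisy_gz, repetitive_gz)
```

This is why the sample deliberately generates low-compressibility content: the blob sizes you see with `az storage blob list` remain meaningful evidence that a large payload was actually offloaded.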
| Setting | Description | Default |
|---|---|---|
| `DTS_CONNECTION_STRING` | DTS emulator or Azure connection string | `Endpoint=http://localhost:8080;Authentication=None` |
| `TASKHUB_NAME` | Task hub name | `default` |
| `AzureWebJobsStorage` | Storage for Functions host state and payload blobs | `UseDevelopmentStorage=true` locally |
| `PAYLOAD_SIZE_BYTES` | Payload size used by the HTTP starter | `1572864` |
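For reference, the default payload size sits above both the offload threshold and the 1 MiB DTS message boundary. A quick sanity check (constants taken from the settings above):

```python
MIB = 1024 * 1024          # 1 MiB DTS message boundary
threshold = 900_000        # largePayloadStorageThresholdBytes
payload = 1_572_864        # PAYLOAD_SIZE_BYTES default

print(payload == int(1.5 * MIB), threshold < MIB < payload)  # prints True True
```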
This sample includes `azure.yaml`, `infra/`, and deployment scripts so you can provision the DTS + Function App + Storage resources together.
- Sign in:

  ```bash
  az login
  azd auth login
  ```

- From this sample directory, provision and deploy:

  ```bash
  azd up
  ```

  The deployment provisions:

  - an Azure Function App
  - a storage account used for `AzureWebJobsStorage`
  - a Durable Task Scheduler resource and task hub
  - a user-assigned managed identity with the required DTS and storage permissions

- Load the environment values:

  ```bash
  eval "$(azd env get-values)"
  ```

- Start the orchestration in Azure:

  ```bash
  curl -X POST "https://${AZURE_FUNCTION_NAME}.azurewebsites.net/api/StartLargePayload"
  ```

- Verify payload blobs in Azure storage:

  ```bash
  az storage blob list \
    --account-name $AZURE_STORAGE_ACCOUNT_NAME \
    --container-name durabletask-payloads \
    --auth-mode login \
    --output table
  ```

- Open the DTS dashboard URL in the Azure portal to inspect the orchestration history.
- `LargePayloadOrchestration.cs` — HTTP starter, orchestrator, and echo activity
- `host.json` — Durable Task Scheduler + large payload configuration
- `docker-compose.yml` — local DTS emulator + Azurite dependencies
- `azure.yaml` and `infra/` — Azure deployment path
Stop the local containers:

```bash
docker compose down
```

Delete the Azure environment when you are done:

```bash
azd down --purge
```