In general, you should only pass small amounts of data this way. Rather than passing large payloads directly, write the data to a database, cache, or file system, then pass the key or file path to the workflow and activities. The activities can then use the key or file path to read the data back.
When the application genuinely needs to carry large bytes through history — documents, media blobs, serialized exports — enable [External Payload Storage](../features/external-payload-storage.md) on the namespace. The runtime offloads over-threshold payloads to a configured object store and records a verifiable reference envelope in history, keeping replay integrity while staying under the [`payload_size_bytes` structural limit](../constraints/structural-limits.md#payload-size).
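As a minimal sketch of the envelope idea: once a payload crosses the threshold, history stores a small, hash-bearing reference instead of the bytes. Only the `durable-workflow.v2.external-payload-reference.v1` type string comes from this documentation; the `uri`, `size_bytes`, and `sha256` field names and the object key are illustrative assumptions, not the runtime's actual schema.

```php
<?php

// Illustrative only: field names and the object key are assumptions.
$payload = str_repeat('x', 3 * 1024 * 1024); // 3 MiB blob, over threshold

$envelope = [
    'type'       => 'durable-workflow.v2.external-payload-reference.v1',
    'uri'        => 's3://dw-payloads/billing/0a1b2c', // assumed object key
    'size_bytes' => strlen($payload),
    'sha256'     => hash('sha256', $payload), // lets replay verify the bytes
];

// History records the small envelope, not the 3 MiB payload.
echo json_encode($envelope), PHP_EOL;
```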
## Output
Once the workflow has completed, you can retrieve the output using the `output()` method.
To work within the limit, either enable
[External Payload Storage](../features/external-payload-storage.md) on the
namespace so the runtime transparently offloads over-threshold payloads to a
configured object store, or store the bytes yourself and pass an
application-level reference:
```php
$ref = Storage::put('docs/incoming.pdf', $blob);
activity(ProcessDocumentActivity::class, $ref);
```
External payload storage preserves replay integrity by recording a hashed
`durable-workflow.v2.external-payload-reference.v1` envelope in history, so
the reference envelope, not the bytes, becomes the payload the limit sees.
### Memo size
When a workflow upserts memo entries via `upsertMemo()`, the executor merges the new entries into the existing memo map, then JSON-encodes the merged result and checks the byte length against `memo_size_bytes`. If the merged memo exceeds the limit, the run fails before the memo is persisted.
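The merge-then-measure behavior can be sketched in plain PHP. This is an illustration of the described check, not the runtime's implementation; the `2048` limit value and the variable names are assumptions.

```php
<?php

// Assumed memo_size_bytes value for illustration, not a documented default.
$memoSizeBytes = 2048;

$existing = ['tenant' => 'acme', 'tier' => 'gold'];
$upserted = ['tier' => 'platinum', 'region' => 'eu'];

// New entries are merged over the existing memo map (later keys win)...
$merged = array_merge($existing, $upserted);

// ...then the merged result is JSON-encoded and its byte length checked.
$encoded = json_encode($merged);

if (strlen($encoded) > $memoSizeBytes) {
    // In the runtime, the run fails before the memo is persisted.
    throw new RuntimeException('merged memo exceeds memo_size_bytes');
}
```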
If your deployment runs package migrations alongside application migrations, migration 157 detects a pre-existing `workflow_schedules` table and handles it gracefully: if the table already matches the package schema it is left as-is; if it was created by an earlier shim migration with a different schema, it is replaced.

| Driver | URI scheme | When to use |
| --- | --- | --- |
| `local` | `file://` | Local development, CI, and single-node deployments where the server and workers share a filesystem. Not suitable when workers run on different hosts than the server. |
| `s3` | `s3://` | Amazon S3 and S3-compatible object stores (MinIO, Cloudflare R2, etc.) through a server-side filesystem disk. |
| `gcs` | `gs://` | Google Cloud Storage through a server-side filesystem disk. |
| `azure` | `azure://` | Azure Blob Storage through a server-side filesystem disk. |
Object-store drivers configure the actual bucket/container credentials
through a named server-side filesystem disk, so secrets live in the server's
configuration rather than in the namespace policy record.
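For example, if the server is a Laravel application (an assumption; the docs only say the disk is server-side), the named disk could be defined in the standard filesystems config. The key names below follow the usual Flysystem S3 disk options, not a schema from these docs:

```php
// config/filesystems.php (assumed Laravel-style layout)
'disks' => [
    'external-payload-objects' => [
        'driver' => 's3',
        'key'    => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION'),
        'bucket' => 'dw-payloads',
    ],
],
```

The namespace policy then references the disk by name (for example `--disk=external-payload-objects`), keeping credentials out of the policy record.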
## Configuring A Namespace
Configure the policy with the [CLI](../polyglot/cli-reference.md#namespace-and-search-attribute-commands)
or the [server HTTP API](../polyglot/server-api-reference.md#namespace-and-storage).
Both write the same `external_payload_storage` envelope on the namespace
record.
### With The CLI
```bash
# Production namespace using Amazon S3 through the 'external-payload-objects' disk.
dw namespace:set-storage-driver billing s3 \
  --disk=external-payload-objects \
  --bucket=dw-payloads \
  --prefix=billing/ \
  --threshold-bytes=2097152

# Development namespace using the local filesystem.
dw namespace:set-storage-driver dev local \
  --uri=file:///var/lib/durable-workflow/payloads

# Disable offload while keeping the policy record (all payloads stay inline).
dw namespace:set-storage-driver billing s3 \
  --disk=external-payload-objects \
  --bucket=dw-payloads \
  --disable
```
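For reference, the `--threshold-bytes` value used in the production example is simply 2 MiB expressed in bytes:

```php
<?php
// 2 MiB in bytes, matching --threshold-bytes=2097152 above.
echo 2 * 1024 * 1024, PHP_EOL; // prints 2097152
```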
### With The Server API
```bash
curl -sS -X PUT "$DURABLE_WORKFLOW_SERVER_URL/api/namespaces/billing/external-storage" \
```