`docs/API.md`

Starts a free compute job and returns `jobId` if successful.

| Parameter | Type | Required | Description |
|---|---|---|---|
| additionalViewers | object | | Optional array of addresses that are allowed to fetch the result |
| queueMaxWaitTime | number | | Optional max time in seconds a job can wait in the queue before being started |
| encryptedDockerRegistryAuth | string | | ECIES-encrypted docker auth schema for the image (see [Private Docker Registries with Per-Job Authentication](../env.md#private-docker-registries-with-per-job-authentication)) |
| output | string | | ECIES-encrypted instructions for uploading compute results (see [C2D result upload to remote storage](../Storage.md#c2d-result-upload-to-remote-storage)) |
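As a hedged illustration, the optional parameters above might be supplied together like this. The interface name and wrapper shape are assumptions for the sketch, not the actual API schema, and all values are placeholders:

```typescript
// Illustrative only: the optional start-compute fields from the table above.
// The interface name and overall payload shape are assumptions, not the real
// API schema; values are placeholders.
interface StartComputeOptionals {
  additionalViewers?: string[]          // addresses allowed to fetch the result
  queueMaxWaitTime?: number             // max seconds a job may wait in the queue
  encryptedDockerRegistryAuth?: string  // ECIES-encrypted docker auth schema
  output?: string                       // ECIES-encrypted upload instructions
}

const optionals: StartComputeOptionals = {
  additionalViewers: ['0xAbC...'],      // placeholder address
  queueMaxWaitTime: 600,                // wait at most 10 minutes in the queue
  output: '<ECIES-encrypted ComputeOutput JSON>' // placeholder ciphertext
}

console.log(Object.keys(optionals).length) // number of optional fields set
```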
`docs/Storage.md`

| Key | Required | Description |
|---|---|---|
|`objectKey`| Yes | Object key (path within the bucket) |
|`accessKeyId`| Yes | Access key for the S3-compatible API |
|`secretAccessKey`| Yes | Secret key for the S3-compatible API |
|`region`| No | Region (e.g. `us-east-1`). Optional; defaults to `us-east-1` if omitted. Some backends (e.g. Ceph) may ignore it. |
|`forcePathStyle`| No | If `true`, use path-style addressing (e.g. `endpoint/bucket/key`). Required for some S3-compatible services (e.g. MinIO). Default `false` (virtual-host style, e.g. `bucket.endpoint/key`, standard for AWS S3). |
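A minimal S3-compatible storage descriptor using only the keys from the table above might look like the following sketch. The credential values are placeholders, and the exact wrapper shape around these keys is an assumption:

```typescript
// Sketch of an S3-compatible storage object built from the keys documented
// above; credential values are placeholders, not real secrets.
const s3Storage = {
  objectKey: 'results/job-123/outputs.tar', // path within the bucket
  accessKeyId: 'EXAMPLEKEYID',              // placeholder access key
  secretAccessKey: 'EXAMPLESECRET',         // placeholder secret key
  region: 'us-east-1',                      // optional; this is also the default
  forcePathStyle: true                      // path-style addressing, e.g. for MinIO
}

console.log(s3Storage.forcePathStyle) // true → endpoint/bucket/key addressing
```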

| Key | Required | Description |
|---|---|---|
|`url`| Yes | Full FTP or FTPS URL. Supports `ftp://` and `ftps://`. May include credentials as `ftp://user:password@host:port/path`. Default port is 21 for FTP and 990 for FTPS. |
### Validation
FTPStorage supports `upload(filename, stream)`. If the file object’s `url` ends …
---
## C2D result upload to remote storage
Compute-to-Data jobs can upload their output archive to a remote backend instead of keeping it only on the node's local disk.

### How it works

1. You build a `ComputeOutput` JSON object with:
   - `remoteStorage`: one of the storage objects from this document (`url`, `s3`, `ftp`, etc.)
   - optional `encryption`: currently only `AES` is accepted, with a hex key
2. You ECIES-encrypt that JSON into a string and send it in the compute command as `output`.
3. When the job finishes:
   - if `output` is present and remote storage supports upload, Ocean Node uploads the tar archive remotely
   - otherwise, Ocean Node falls back to local `outputs.tar` behavior
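The steps above can be sketched as follows. The field names follow this section; the nested storage shape is an assumption for illustration, and the ECIES encryption step itself is elided (any ECIES library would do):

```typescript
// Hedged sketch of assembling the ComputeOutput object described above.
// Field names follow this section; the nested remoteStorage shape is an
// assumption, and the ECIES step is elided.
interface ComputeOutput {
  remoteStorage: Record<string, unknown>     // one of the storage objects (url, s3, ftp, ...)
  encryption?: { type: 'AES'; key: string }  // AES is currently the only accepted type
}

const computeOutput: ComputeOutput = {
  remoteStorage: {
    type: 's3',
    objectKey: 'results/outputs.tar'         // illustrative destination
  },
  encryption: { type: 'AES', key: '00'.repeat(32) } // placeholder 32-byte hex key
}

// This JSON string is what would be ECIES-encrypted and sent as `output`.
const payload = JSON.stringify(computeOutput)
console.log(payload.includes('remoteStorage')) // true
```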