Commit f295836

[Docs] Document dstack offer (#2546)
1 parent e238c0d commit f295836

8 files changed

Lines changed: 204 additions & 3 deletions

docs/docs/concepts/dev-environments.md

Lines changed: 3 additions & 0 deletions
@@ -212,6 +212,9 @@ and their quantity. Examples: `nvidia` (one NVIDIA GPU), `A100` (one A100), `A10
If you are using parallel communicating processes (e.g., dataloaders in PyTorch), you may need to configure
`shm_size`, e.g. set it to `16GB`.

+> If you’re unsure which offers (hardware configurations) are available from the configured backends, use the
+> [`dstack offer`](../reference/cli/dstack/offer.md#list-gpu-offers) command to list them.
+
### Python version

If you don't specify `image`, `dstack` uses its base Docker image pre-configured with
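As an editorial aside, the `shm_size` setting referenced in this hunk can be sketched in a `dstack` configuration. This is a minimal illustration only; the `name` value and the `A100:40GB` GPU spec are placeholders, not taken from this commit:

```yaml
type: dev-environment
# "name" and the GPU spec below are placeholder values
name: cuda-dev
ide: vscode

resources:
  gpu: A100:40GB   # vendor/name/memory spec, in the format this commit documents
  shm_size: 16GB   # larger shared memory segment for parallel dataloaders
```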

docs/docs/concepts/fleets.md

Lines changed: 3 additions & 0 deletions
@@ -121,6 +121,9 @@ and their quantity. Examples: `nvidia` (one NVIDIA GPU), `A100` (one A100), `A10

Currently, only 8 TPU cores can be specified, supporting single TPU device workloads. Multi-TPU support is coming soon.

+> If you’re unsure which offers (hardware configurations) are available from the configured backends, use the
+> [`dstack offer`](../reference/cli/dstack/offer.md#list-gpu-offers) command to list them.
+
#### Blocks { #cloud-blocks }

For cloud fleets, `blocks` function the same way as in SSH fleets.

docs/docs/concepts/services.md

Lines changed: 3 additions & 0 deletions
@@ -359,6 +359,9 @@ and their quantity. Examples: `nvidia` (one NVIDIA GPU), `A100` (one A100), `A10
If you are using parallel communicating processes (e.g., dataloaders in PyTorch), you may need to configure
`shm_size`, e.g. set it to `16GB`.

+> If you’re unsure which offers (hardware configurations) are available from the configured backends, use the
+> [`dstack offer`](../reference/cli/dstack/offer.md#list-gpu-offers) command to list them.
+
### Python version

If you don't specify `image`, `dstack` uses its base Docker image pre-configured with

docs/docs/concepts/tasks.md

Lines changed: 3 additions & 0 deletions
@@ -234,6 +234,9 @@ and their quantity. Examples: `nvidia` (one NVIDIA GPU), `A100` (one A100), `A10
If you are using parallel communicating processes (e.g., dataloaders in PyTorch), you may need to configure
`shm_size`, e.g. set it to `16GB`.

+> If you’re unsure which offers (hardware configurations) are available from the configured backends, use the
+> [`dstack offer`](../reference/cli/dstack/offer.md#list-gpu-offers) command to list them.
+
### Python version

If you don't specify `image`, `dstack` uses its base Docker image pre-configured with

docs/docs/guides/protips.md

Lines changed: 43 additions & 3 deletions
@@ -274,13 +274,24 @@ To run in detached mode, use `-d` with `dstack apply`.

> If you detached the CLI, you can always re-attach to a run via [`dstack attach`](../reference/cli/dstack/attach.md).

-## GPU
+## GPU specification

`dstack` natively supports NVIDIA GPU, AMD GPU, and Google Cloud TPU accelerator chips.

-The `gpu` property within [`resources`](../reference/dstack.yml/dev-environment.md#resources) (or the `--gpu` option with `dstack apply`)
+The `gpu` property within [`resources`](../reference/dstack.yml/dev-environment.md#resources) (or the `--gpu` option with [`dstack apply`](../reference/cli/dstack/apply.md) or
+[`dstack offer`](../reference/cli/dstack/offer.md))
allows specifying not only memory size but also GPU vendor, names, their memory, and quantity.

+The general format is: `<vendor>:<comma-separated names>:<memory range>:<quantity range>`.
+
+Each component is optional.
+
+Ranges can be:
+
+* **Closed** (e.g. `24GB..80GB` or `1..8`)
+* **Open** (e.g. `24GB..` or `1..`)
+* **Single values** (e.g. `1` or `24GB`)
+
Examples:

- `1` (any GPU)
@@ -308,7 +319,36 @@ The GPU vendor is indicated by one of the following case-insensitive values:
Currently, you can't specify other than 8 TPU cores. This means only single host workloads are supported.
Support for multiple hosts is coming soon.

-## Monitoring metrics
+## Offers
+
+If you're not sure which offers (hardware configurations) are available from the configured backends, use the
+[`dstack offer`](../reference/cli/dstack/offer.md#list-gpu-offers) command.
+
+<div class="termy">
+
+```shell
+$ dstack offer --gpu H100:1.. --max-offers 10
+Getting offers...
+---> 100%
+
+ #   BACKEND     REGION     INSTANCE TYPE          RESOURCES                                      SPOT  PRICE
+ 1   datacrunch  FIN-01     1H100.80S.30V          30xCPU, 120GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.19
+ 2   datacrunch  FIN-02     1H100.80S.30V          30xCPU, 120GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.19
+ 3   datacrunch  FIN-02     1H100.80S.32V          32xCPU, 185GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.19
+ 4   datacrunch  ICE-01     1H100.80S.32V          32xCPU, 185GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.19
+ 5   runpod      US-KS-2    NVIDIA H100 PCIe       16xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.39
+ 6   runpod      CA         NVIDIA H100 80GB HBM3  24xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.69
+ 7   nebius      eu-north1  gpu-h100-sxm           16xCPU, 200GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.95
+ 8   runpod      AP-JP-1    NVIDIA H100 80GB HBM3  20xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.99
+ 9   runpod      CA-MTL-1   NVIDIA H100 80GB HBM3  28xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.99
+ 10  runpod      CA-MTL-2   NVIDIA H100 80GB HBM3  26xCPU, 125GB, 1xH100 (80GB), 100.0GB (disk)   no    $2.99
+ ...
+Shown 10 of 99 offers, $127.816 max
+```
+
+</div>
+
+## Metrics

While `dstack` allows the use of any third-party monitoring tools (e.g., Weights and Biases), you can also
monitor container metrics such as CPU, memory, and GPU usage using the [built-in

docs/docs/installation/index.md

Lines changed: 2 additions & 0 deletions
@@ -80,6 +80,8 @@ The server can run on your laptop or any environment with access to the cloud an

</div>

+To verify that backends are properly configured, use the [`dstack offer`](../reference/cli/dstack/offer.md#list-gpu-offers) command to list available GPU offers.
+
!!! info "Server deployment"
    For more details on server deployment options, see the
    [Server deployment](../guides/server-deployment.md) guide.
Lines changed: 146 additions & 0 deletions
@@ -0,0 +1,146 @@
# dstack offer

Displays available offers (hardware configurations) from the configured backends (or offers that match already provisioned fleets).

The output includes backend, region, instance type, resources, spot availability, and pricing details.

## Usage

This command accepts most of the same arguments as [`dstack apply`](apply.md).

<div class="termy">

```shell
$ dstack offer --help
#GENERATE#
```

</div>

## Examples

### List GPU offers

The `--gpu` flag accepts the same specification format as the `gpu` property in [`dev environment`](../../../concepts/dev-environments.md), [`task`](../../../concepts/tasks.md),
[`service`](../../../concepts/services.md), and [`fleet`](../../../concepts/fleets.md) configurations.

The general format is: `<vendor>:<comma-separated names>:<memory range>:<quantity range>`.

Each component is optional.

Ranges can be:

* **Closed** (e.g. `24GB..80GB` or `1..8`)
* **Open** (e.g. `24GB..` or `1..`)
* **Single values** (e.g. `1` or `24GB`)

Examples:

* `--gpu nvidia` (any NVIDIA GPU)
* `--gpu nvidia:1..8` (from one to eight NVIDIA GPUs)
* `--gpu A10,A100` (single NVIDIA A10 or A100 GPU)
* `--gpu A100:80GB` (single NVIDIA A100 with 80GB vRAM)
* `--gpu 24GB..80GB` (any GPU with 24GB to 80GB vRAM)
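As an editor's aside, the range grammar above can be illustrated with a small parser. This is a sketch only; `parse_range` is a hypothetical helper for exposition, not part of the dstack CLI, and the real parser may behave differently:

```python
import re

def parse_range(spec: str):
    """Parse a dstack-style range ('24GB..80GB', '1..', '24GB') into
    (min, max), where None marks an open bound. Illustrative only."""
    def bound(token: str):
        if not token:
            return None  # open bound, e.g. the right side of '1..'
        match = re.fullmatch(r"(\d+)(GB)?", token)
        if not match:
            raise ValueError(f"bad range bound: {token!r}")
        return int(match.group(1))

    if ".." in spec:                 # closed or open range
        lo, hi = spec.split("..", 1)
        return (bound(lo), bound(hi))
    value = bound(spec)              # single value: min == max
    return (value, value)

print(parse_range("24GB..80GB"))  # → (24, 80)
print(parse_range("1.."))         # → (1, None)
print(parse_range("24GB"))        # → (24, 24)
```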
44+
45+
<!-- TODO: Mention TPU -->
46+
<!-- TODO: For TPU: support https://github.com/dstackai/dstack/issues/2154 -->
47+
48+
The following example lists offers with one or more H100 GPUs:
49+
50+
<div class="termy">
51+
52+
```shell
53+
$ dstack offer --gpu H100:1.. --max-offers 10
54+
Getting offers...
55+
---> 100%
56+
57+
# BACKEND REGION INSTANCE TYPE RESOURCES SPOT PRICE
58+
1 datacrunch FIN-01 1H100.80S.30V 30xCPU, 120GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
59+
2 datacrunch FIN-02 1H100.80S.30V 30xCPU, 120GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
60+
3 datacrunch FIN-02 1H100.80S.32V 32xCPU, 185GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
61+
4 datacrunch ICE-01 1H100.80S.32V 32xCPU, 185GB, 1xH100 (80GB), 100.0GB (disk) no $2.19
62+
5 runpod US-KS-2 NVIDIA H100 PCIe 16xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.39
63+
6 runpod CA NVIDIA H100 80GB HBM3 24xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.69
64+
7 nebius eu-north1 gpu-h100-sxm 16xCPU, 200GB, 1xH100 (80GB), 100.0GB (disk) no $2.95
65+
8 runpod AP-JP-1 NVIDIA H100 80GB HBM3 20xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.99
66+
9 runpod CA-MTL-1 NVIDIA H100 80GB HBM3 28xCPU, 251GB, 1xH100 (80GB), 100.0GB (disk) no $2.99
67+
10 runpod CA-MTL-2 NVIDIA H100 80GB HBM3 26xCPU, 125GB, 1xH100 (80GB), 100.0GB (disk) no $2.99
68+
...
69+
Shown 10 of 99 offers, $127.816 max
70+
```
71+
72+
</div>
73+
74+
### JSON format
75+
76+
Use `--json` to output offers in the JSON format.
77+
78+
<div class="termy">
79+
80+
```shell
81+
$ dstack offer --gpu amd --json
82+
{
83+
"project": "main",
84+
"user": "admin",
85+
"resources": {
86+
"cpu": {
87+
"min": 2,
88+
"max": null
89+
},
90+
"memory": {
91+
"min": 8.0,
92+
"max": null
93+
},
94+
"shm_size": null,
95+
"gpu": {
96+
"vendor": "amd",
97+
"name": null,
98+
"count": {
99+
"min": 1,
100+
"max": 1
101+
},
102+
"memory": null,
103+
"total_memory": null,
104+
"compute_capability": null
105+
},
106+
"disk": {
107+
"size": {
108+
"min": 100.0,
109+
"max": null
110+
}
111+
}
112+
},
113+
"max_price": null,
114+
"spot": null,
115+
"reservation": null,
116+
"offers": [
117+
{
118+
"backend": "runpod",
119+
"region": "EU-RO-1",
120+
"instance_type": "AMD Instinct MI300X OAM",
121+
"resources": {
122+
"cpus": 24,
123+
"memory_mib": 289792,
124+
"gpus": [
125+
{
126+
"name": "MI300X",
127+
"memory_mib": 196608,
128+
"vendor": "amd"
129+
}
130+
],
131+
"spot": false,
132+
"disk": {
133+
"size_mib": 102400
134+
},
135+
"description": "24xCPU, 283GB, 1xMI300X (192GB), 100.0GB (disk)"
136+
},
137+
"spot": false,
138+
"price": 2.49,
139+
"availability": "available"
140+
}
141+
],
142+
"total_offers": 1
143+
}
144+
```
145+
146+
</div>
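Because `--json` emits machine-readable output, it is easy to post-process in a script. A minimal sketch, with field names taken from the sample output above (the embedded string is a trimmed copy of that sample, not live CLI output):

```python
import json

# Trimmed copy of the `dstack offer --gpu amd --json` output shown above.
raw = """
{
  "project": "main",
  "offers": [
    {
      "backend": "runpod",
      "region": "EU-RO-1",
      "instance_type": "AMD Instinct MI300X OAM",
      "spot": false,
      "price": 2.49,
      "availability": "available"
    }
  ],
  "total_offers": 1
}
"""

data = json.loads(raw)
# Pick the cheapest offer by hourly price.
cheapest = min(data["offers"], key=lambda offer: offer["price"])
print(f"{cheapest['backend']}/{cheapest['region']}: ${cheapest['price']}")
# → runpod/EU-RO-1: $2.49
```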

mkdocs.yml

Lines changed: 1 addition & 0 deletions
@@ -231,6 +231,7 @@ nav:
      - dstack metrics: docs/reference/cli/dstack/metrics.md
      - dstack config: docs/reference/cli/dstack/config.md
      - dstack fleet: docs/reference/cli/dstack/fleet.md
+      - dstack offer: docs/reference/cli/dstack/offer.md
      - dstack volume: docs/reference/cli/dstack/volume.md
      - dstack gateway: docs/reference/cli/dstack/gateway.md
  - API:
