
Commit 2b4c012

[Docs] Improve Kubernetes documentation
Updated `README`, `Overview`, `Installation`

1 parent 7469f78 commit 2b4c012

5 files changed: 35 additions & 30 deletions


README.md

Lines changed: 10 additions & 8 deletions
@@ -14,9 +14,11 @@
 
 </div>
 
-`dstack` provides a unified control plane for running development, training, and inference on GPUs — across cloud VMs, Kubernetes, or on-prem clusters. It helps your team avoid vendor lock-in and reduce GPU costs.
+`dstack` is a unified control plane for GPU provisioning and orchestration that works with any GPU cloud, Kubernetes, or on-prem clusters.
 
-#### Accelerators
+It streamlines development, training, and inference, and is compatible with any hardware, open-source tools, and frameworks.
+
+#### Hardware
 
 `dstack` supports `NVIDIA`, `AMD`, `Google TPU`, `Intel Gaudi`, and `Tenstorrent` accelerators out of the box.
 
@@ -44,15 +46,15 @@
 
 #### Set up the server
 
-##### (Optional) Configure backends
+##### Configure backends
+
+To orchestrate compute across cloud providers or existing Kubernetes clusters, you need to configure backends.
 
-To use `dstack` with cloud providers, configure backends
-via the `~/.dstack/server/config.yml` file.
+Backends can be set up in `~/.dstack/server/config.yml` or through the [project settings page](../concepts/projects.md#backends) in the UI.
 
-For more details on how to configure backends, check [Backends](https://dstack.ai/docs/concepts/backends).
+For more details, see [Backends](../concepts/backends.md).
 
-> For using `dstack` with on-prem servers, create [SSH fleets](https://dstack.ai/docs/concepts/fleets#ssh)
-> once the server is up.
+> When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](../concepts/fleets.md#ssh) once the server is up.
 
 ##### Start the server
 
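For readers following the updated README, a backend entry in `~/.dstack/server/config.yml` might look like the sketch below. It shows a single project with an AWS backend; the credential values are placeholders, and the exact fields vary by provider, so check the [Backends](https://dstack.ai/docs/concepts/backends) reference before copying it.

```yaml
# ~/.dstack/server/config.yml — minimal sketch, one project with an AWS backend.
# The access_key/secret_key values are placeholders, not real credentials.
projects:
  - name: main
    backends:
      - type: aws
        creds:
          type: access_key
          access_key: <YOUR_ACCESS_KEY>
          secret_key: <YOUR_SECRET_KEY>
```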

docs/docs/concepts/backends.md

Lines changed: 2 additions & 2 deletions
@@ -1080,9 +1080,9 @@ However, often [SSH fleets](../concepts/fleets.md#ssh) are a simpler and lighter
 ### SSH fleets
 
 SSH fleets require no backend configuration.
-All you need to do is [provide hostnames and SSH credentials](../concepts/fleets.md#ss), and `dstack` sets up a fleet that can orchestrate container-based runs on your servers.
+All you need to do is [provide hostnames and SSH credentials](../concepts/fleets.md#ssh), and `dstack` sets up a fleet that can orchestrate container-based runs on your servers.
 
-> SSH fleets support the same features as [VM-based](#vm-based) backends.
+SSH fleets support the same features as [VM-based](#vm-based) backends.
 
 !!! info "What's next"
     1. See the [`~/.dstack/server/config.yml`](../reference/server/config.yml.md) reference
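The "provide hostnames and SSH credentials" step in the hunk above corresponds to a fleet configuration file. A minimal sketch is shown below; the fleet name, host addresses, and key path are illustrative placeholders, so consult the [fleets](https://dstack.ai/docs/concepts/fleets#ssh) docs for the full schema.

```yaml
# fleet.dstack.yml — minimal SSH fleet sketch; hosts and key path are placeholders.
type: fleet
name: my-ssh-fleet

ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 192.168.1.10
    - 192.168.1.11
```

Once the server is up, such a file is applied with `dstack apply -f fleet.dstack.yml`.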

docs/docs/index.md

Lines changed: 4 additions & 3 deletions
@@ -1,9 +1,10 @@
 # What is dstack?
 
-`dstack` is an open-source container orchestrator that simplifies workload orchestration
-and drives GPU utilization for ML teams. It works with any GPU cloud, on-prem cluster, or accelerated hardware.
+`dstack` is a unified control plane for GPU provisioning and orchestration that works with any GPU cloud, Kubernetes, or on-prem clusters.
 
-#### Accelerators
+It streamlines development, training, and inference, and is compatible with any hardware, open-source tools, and frameworks.
+
+#### Hardware
 
 `dstack` supports `NVIDIA`, `AMD`, `TPU`, `Intel Gaudi`, and `Tenstorrent` accelerators out of the box.
 

docs/docs/installation/index.md

Lines changed: 9 additions & 8 deletions
@@ -1,20 +1,21 @@
 # Installation
 
-> If you don't want to host the `dstack` server (or want to access GPU marketplace),
-> skip installation and proceed to [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"}.
+!!! info "dstack Sky"
+    If you don't want to host the `dstack` server (or want to access GPU marketplace),
+    skip installation and proceed to [dstack Sky :material-arrow-top-right-thin:{ .external }](https://sky.dstack.ai){:target="_blank"}.
 
 ## Set up the server
 
-### (Optional) Configure backends
+### Configure backends
 
-Backends allow `dstack` to manage compute across various providers.
-They can be configured via `~/.dstack/server/config.yml` (or through the [project settings page](../concepts/projects.md#backends) in the UI).
+To orchestrate compute across cloud providers or existing Kubernetes clusters, you need to configure backends.
 
-For more details on how to configure backends, check [Backends](../concepts/backends.md).
+Backends can be set up in `~/.dstack/server/config.yml` or through the [project settings page](../concepts/projects.md#backends) in the UI.
+
+For more details, see [Backends](../concepts/backends.md).
 
 ??? info "SSH fleets"
-    For using `dstack` with on-prem servers, create [SSH fleets](../concepts/fleets.md#ssh)
-    once the server is up.
+    When using `dstack` with on-prem servers, backend configuration isn’t required. Simply create [SSH fleets](../concepts/fleets.md#ssh) once the server is up.
 
 ### Start the server
 
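Since this commit centers on the Kubernetes documentation, it may help to see what the "existing Kubernetes clusters" case looks like in practice. Below is a sketch of a `kubernetes` backend entry in `~/.dstack/server/config.yml`; the field layout is an assumption based on the general backend pattern, so verify the exact field names against the [Backends](https://dstack.ai/docs/concepts/backends) reference.

```yaml
# Sketch: pointing dstack at an existing Kubernetes cluster.
# Field names are an assumption; verify against the Backends reference.
projects:
  - name: main
    backends:
      - type: kubernetes
        kubeconfig:
          filename: ~/.kube/config
```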

docs/overrides/home.html

Lines changed: 10 additions & 9 deletions
@@ -53,9 +53,9 @@
 <h1>The orchestration layer for modern ML teams</h1>
 
 <p>
-<span class="highlight">dstack</span> provides a unified control plane for running development, training, and inference
-on GPUs — across cloud VMs, Kubernetes, or on-prem clusters. It helps your team avoid vendor lock-in and reduce GPU
-costs.
+<span class="highlight">dstack</span> provides ML teams with a unified control plane for GPU provisioning and orchestration
+across cloud, Kubernetes, and on-prem. It streamlines development, training, and inference — reducing costs 3–7x and
+preventing lock-in.
 </p>
 </div>
 
@@ -83,14 +83,15 @@ <h1>The orchestration layer for modern ML teams</h1>
 <div class="tx-landing__major_feature">
 <div class="section">
 <div class="block margin right">
-<h2>One control plane for all your GPUs</h2>
+<h2>An open platform for GPU orchestration</h2>
 <p>
-Instead of wrestling with complex Helm charts and Kubernetes operators, <span class="highlight">dstack</span> provides a simple, declarative way to
-manage clusters, containerized dev environments, training, and inference.
+Managing AI infrastructure requires efficient GPU orchestration, whether workloads run
+on a single GPU cloud, across multiple GPU providers, or on-prem clusters.
 </p>
 
-<p>This container-native interface makes your team more productive and your GPU usage more efficient—leading to lower
-costs and faster iteration.
+<p>
+<span class="highlight">dstack</span> provides an open stack for GPU orchestration that streamlines development, training,
+and inference, and can be used with any hardware, open-source tools, and frameworks.
 </p>
 
 <!-- TODO: Add `Why dstack?` -->
@@ -219,7 +220,7 @@ <h2>Easy to use with on-prem clusters</h2>
 </svg></span>
 </a>
 
-<a href="/docs/concepts/fleets#ssh" target="_blank" class="md-button md-button-secondary small">
+<a href="/docs/concepts/backends#ssh-fleets" target="_blank" class="md-button md-button-secondary small">
 <span>SSH fleets</span>
 <span class="icon"><svg viewBox="0 0 13 10" xmlns="http://www.w3.org/2000/svg">
 <path