diff --git a/.gitignore b/.gitignore
index e9b1a6093..28878cc1f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -24,3 +24,4 @@ build/
.vscode
.aider*
uv.lock
+.local/
diff --git a/docs/docs/guides/clusters.md b/docs/docs/guides/clusters.md
index 0231a61c3..b8f6c3a8f 100644
--- a/docs/docs/guides/clusters.md
+++ b/docs/docs/guides/clusters.md
@@ -76,5 +76,5 @@ Refer to [instance volumes](../concepts/volumes.md#instance) for an example.
!!! info "What's next?"
1. Read about [distributed tasks](../concepts/tasks.md#distributed-tasks), [fleets](../concepts/fleets.md), and [volumes](../concepts/volumes.md)
- 2. Browse the [Clusters](../../examples.md#clusters) examples
+ 2. Browse the [Clusters](../../examples.md#clusters) and [Distributed training](../../examples.md#distributed-training) examples
diff --git a/docs/examples.md b/docs/examples.md
index 128640b1e..c28e40eb0 100644
--- a/docs/examples.md
+++ b/docs/examples.md
@@ -83,6 +83,22 @@ hide:
+## Distributed training
+
+
+
## Inference
@@ -128,7 +144,7 @@ hide:
TensorRT-LLM
- Deploy DeepSeek R1 and its distilled version with TensorRT-LLM
+ Deploy DeepSeek models with TensorRT-LLM
diff --git a/docs/examples/distributed-training/ray-ragen/index.md b/docs/examples/distributed-training/ray-ragen/index.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/overrides/main.html b/docs/overrides/main.html
index f6b9abf8c..5725ca145 100644
--- a/docs/overrides/main.html
+++ b/docs/overrides/main.html
@@ -119,6 +119,7 @@
+
diff --git a/examples/.dstack.yml b/examples/.dstack.yml
index fd14ffe1d..1e47c9a73 100644
--- a/examples/.dstack.yml
+++ b/examples/.dstack.yml
@@ -2,14 +2,15 @@ type: dev-environment
# The name is optional, if not specified, generated randomly
name: vscode
-python: "3.11"
-# Uncomment to use a custom Docker image
-#image: dstackai/base:py3.13-0.7-cuda-12.1
+#python: "3.11"
+
+image: un1def/dstack-base:py3.12-dev-cuda-12.1
ide: vscode
# Use either spot or on-demand instances
-spot_policy: auto
+#spot_policy: auto
resources:
- gpu: 1
+ cpu: x86:8..32
+ gpu: 24GB..:1
diff --git a/examples/distributed-training/ray-ragen/.dstack.yml b/examples/distributed-training/ray-ragen/.dstack.yml
new file mode 100644
index 000000000..8dabde9e0
--- /dev/null
+++ b/examples/distributed-training/ray-ragen/.dstack.yml
@@ -0,0 +1,39 @@
+type: task
+name: ray-ragen-cluster
+
+nodes: 2
+
+env:
+- WANDB_API_KEY
+image: whatcanyousee/verl:ngc-cu124-vllm0.8.5-sglang0.4.6-mcore0.12.0-te2.2
+commands:
+ - wget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
+ - bash miniconda.sh -b -p /workflow/miniconda
+ - eval "$(/workflow/miniconda/bin/conda shell.bash hook)"
+ - git clone https://github.com/RAGEN-AI/RAGEN.git
+ - cd RAGEN
+ - bash scripts/setup_ragen.sh
+ - conda activate ragen
+ - cd verl
+ - pip install --no-deps -e .
+ - pip install hf_transfer hf_xet
+ - pip uninstall -y ray
+ - pip install -U "ray[default]"
+ - |
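+    # Start the Ray head on the first node; the other nodes join it as workers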
+ if [ $DSTACK_NODE_RANK = 0 ]; then
+ ray start --head --port=6379;
+ else
+ ray start --address=$DSTACK_MASTER_NODE_IP:6379
+ fi
+
+# Expose Ray dashboard port
+ports:
+ - 8265
+
+resources:
+ gpu: 80GB:8
+ shm_size: 128GB
+
+# Save checkpoints on the instance
+volumes:
+ - /checkpoints:/checkpoints
diff --git a/examples/distributed-training/ray-ragen/README.md b/examples/distributed-training/ray-ragen/README.md
new file mode 100644
index 000000000..35f7afaea
--- /dev/null
+++ b/examples/distributed-training/ray-ragen/README.md
@@ -0,0 +1,133 @@
+# Ray + RAGEN
+
+This example shows how to use `dstack` and [RAGEN :material-arrow-top-right-thin:{ .external }](https://github.com/RAGEN-AI/RAGEN){:target="_blank"}
+to fine-tune an agent on multiple nodes.
+
+Under the hood, `RAGEN` uses [verl :material-arrow-top-right-thin:{ .external }](https://github.com/volcengine/verl){:target="_blank"} for reinforcement learning and [Ray :material-arrow-top-right-thin:{ .external }](https://docs.ray.io/en/latest/){:target="_blank"} for distributed training.
+
+## Create fleet
+
+Before submitting distributed training runs, make sure to create a fleet with `placement` set to `cluster`.
+
+> For more details on how to use clusters with `dstack`, check the [Clusters](https://dstack.ai/docs/guides/clusters) guide.
+
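+A minimal fleet configuration could look like this; the fleet name and resources below are placeholders, so adjust them to your setup:
+
+```yaml
+type: fleet
+# The name is a placeholder
+name: ray-ragen-fleet
+
+nodes: 2
+# Interconnect the instances
+placement: cluster
+
+resources:
+  gpu: 80GB:8
+```
+
+Provision it with `dstack apply -f` before submitting the training task below.
+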
+## Run a Ray cluster
+
+To use Ray with `dstack`, you first need to run a Ray cluster.
+
+The task below runs a Ray cluster on an existing fleet:
+
+
+
+```yaml
+type: task
+name: ray-ragen-cluster
+
+nodes: 2
+
+env:
+- WANDB_API_KEY
+image: whatcanyousee/verl:ngc-cu124-vllm0.8.5-sglang0.4.6-mcore0.12.0-te2.2
+commands:
+ - wget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
+ - bash miniconda.sh -b -p /workflow/miniconda
+ - eval "$(/workflow/miniconda/bin/conda shell.bash hook)"
+ - git clone https://github.com/RAGEN-AI/RAGEN.git
+ - cd RAGEN
+ - bash scripts/setup_ragen.sh
+ - conda activate ragen
+ - cd verl
+ - pip install --no-deps -e .
+ - pip install hf_transfer hf_xet
+ - pip uninstall -y ray
+ - pip install -U "ray[default]"
+ - |
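+    # Start the Ray head on the first node; the other nodes join it as workers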
+ if [ $DSTACK_NODE_RANK = 0 ]; then
+ ray start --head --port=6379;
+ else
+ ray start --address=$DSTACK_MASTER_NODE_IP:6379
+ fi
+
+# Expose Ray dashboard port
+ports:
+ - 8265
+
+resources:
+ gpu: 80GB:8
+ shm_size: 128GB
+
+# Save checkpoints on the instance
+volumes:
+ - /checkpoints:/checkpoints
+```
+
+
+
+We use verl's Docker image for vLLM with FSDP. See [Installation :material-arrow-top-right-thin:{ .external }](https://verl.readthedocs.io/en/latest/start/install.html){:target="_blank"} for more details.
+
+The `RAGEN` setup script `scripts/setup_ragen.sh` isolates dependencies within a Conda environment.
+
+Note that the Ray installation in the RAGEN environment lacks the dashboard, so we reinstall it with `ray[default]`.
+
+Now, if you run this task via `dstack apply`, it automatically forwards Ray's dashboard port to `localhost:8265`.
+
+
+
+```shell
+$ dstack apply -f examples/distributed-training/ray-ragen/.dstack.yml
+```
+
+
+
+As long as `dstack apply` stays attached, you can use `localhost:8265` to submit Ray jobs for execution.
+If `dstack apply` is detached, you can use `dstack attach` to re-attach.
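+
+For example, assuming the run keeps the name from the configuration above, re-attaching would look like this:
+
+```shell
+$ dstack attach ray-ragen-cluster
+```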
+
+## Submit Ray jobs
+
+Before you can submit Ray jobs, make sure `ray` is installed locally:
+
+
+
+```shell
+$ pip install ray
+```
+
+
+
+Now you can submit the training job to the Ray cluster, which is available at `localhost:8265`:
+
+
+
+```shell
+$ export RAY_ADDRESS=http://localhost:8265
+$ ray job submit \
+ -- bash -c "\
+ export PYTHONPATH=/workflow/RAGEN; \
+ cd /workflow/RAGEN; \
+ /workflow/miniconda/envs/ragen/bin/python train.py \
+ --config-name base \
+ system.CUDA_VISIBLE_DEVICES=[0,1,2,3,4,5,6,7] \
+ model_path=Qwen/Qwen2.5-7B-Instruct \
+ trainer.experiment_name=agent-fine-tuning-Qwen2.5-7B \
+ trainer.n_gpus_per_node=8 \
+ trainer.nnodes=2 \
+ micro_batch_size_per_gpu=2 \
+ trainer.default_local_dir=/checkpoints \
+ trainer.save_freq=50 \
+ actor_rollout_ref.rollout.tp_size_check=False \
+ actor_rollout_ref.rollout.tensor_model_parallel_size=4"
+```
+
+
+
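+Since `ray[default]` is installed locally, you can also check on the submitted job from the same terminal. The submission ID below is a placeholder; `ray job submit` prints the real one:
+
+```shell
+$ ray job status raysubmit_XXXXXXXXXXXXXXXX
+$ ray job logs raysubmit_XXXXXXXXXXXXXXXX --follow
+```
+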
+!!! info "Training parameters"
+    1. `actor_rollout_ref.rollout.tensor_model_parallel_size=4`, because `Qwen/Qwen2.5-7B-Instruct` has 28 attention heads and the number of attention heads must be divisible by `tensor_model_parallel_size`
+    2. `actor_rollout_ref.rollout.tp_size_check=False`, because if it is `True`, `tensor_model_parallel_size` must equal `trainer.n_gpus_per_node`
+    3. `micro_batch_size_per_gpu=2`, to keep the RAGEN paper's `rollout_filter_ratio` and `es_manager` settings as they are for a world size of `16`
+
+Using Ray via `dstack` is a powerful way to access the rich Ray ecosystem while benefiting from `dstack`'s provisioning capabilities.
+
+!!! info "What's next"
+ 1. Check the [Clusters](https://dstack.ai/docs/guides/clusters) guide
+ 2. Read about [distributed tasks](https://dstack.ai/docs/concepts/tasks#distributed-tasks) and [fleets](https://dstack.ai/docs/concepts/fleets)
+ 3. Browse Ray's [docs :material-arrow-top-right-thin:{ .external }](https://docs.ray.io/en/latest/train/examples.html){:target="_blank"} for other examples.
diff --git a/examples/misc/ray/README.md b/examples/misc/ray/README.md
index bf336b6ac..d4ba3dc15 100644
--- a/examples/misc/ray/README.md
+++ b/examples/misc/ray/README.md
@@ -33,7 +33,7 @@ name: ray-cluster
nodes: 4
commands:
- pip install -U "ray[default]"
- - >
+ - |
if [ $DSTACK_NODE_RANK = 0 ]; then
ray start --head --port=6379;
else
diff --git a/mkdocs.yml b/mkdocs.yml
index 67b19b368..486ed7e51 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -264,6 +264,8 @@ nav:
- RCCL tests: examples/clusters/rccl-tests/index.md
- A3 Mega: examples/clusters/a3mega/index.md
- A3 High: examples/clusters/a3high/index.md
+ - Distributed training:
+      - Ray + RAGEN: examples/distributed-training/ray-ragen/index.md
- Deployment:
- SGLang: examples/inference/sglang/index.md
- vLLM: examples/inference/vllm/index.md