---
title: Supporting Hot Aisle AMD AI Developer Cloud
date: 2025-08-11
description: "dstack now natively integrates with Hot Aisle, an AMD-only GPU neocloud, enabling automated provisioning of MI300X VMs."
slug: hotaisle
image: https://dstack.ai/static-assets/static-assets/images/dstack-hotaisle.png
categories:
  - Changelog
---

# Supporting Hot Aisle AMD AI Developer Cloud

As the ecosystem around AMD GPUs matures, developers are looking for easier ways to experiment with ROCm, benchmark new architectures, and run cost-effective workloads—without manual infrastructure setup.

`dstack` is an open-source orchestrator designed for AI workloads, providing a lightweight, container-native alternative to Kubernetes and Slurm.

<img src="https://dstack.ai/static-assets/static-assets/images/dstack-hotaisle.png" width="630"/>

Today, we’re excited to announce native integration with [Hot Aisle :material-arrow-top-right-thin:{ .external }](https://www.hotaisle.io/){:target="_blank"}, an AMD-only GPU neocloud offering VMs and clusters at highly competitive on-demand pricing.

<!-- more -->

## About Hot Aisle

Hot Aisle is a next-generation GPU cloud built around AMD’s flagship AI accelerators.

Highlights:

- AMD’s flagship AI-optimized accelerators
- On-demand pricing: $1.99/hour for 1-GPU VMs
- No commitment – start and stop when you want
- First AMD-only GPU backend in `dstack`

While it has already been possible to use Hot Aisle’s 8-GPU MI300X bare-metal clusters via [`SSH fleets`](../../docs/concepts/fleets.md#ssh-fleets), this integration now enables automated provisioning of VMs—made possible by Hot Aisle’s newly added API for MI300X instances.
| 35 | + |
| 36 | +## Why dstack |
| 37 | + |
| 38 | +`dstack` is a new open-source container orchestrator built specifically for GPU workloads. |
| 39 | +It fills the gaps left by Kubernetes and Slurm when it comes to GPU provisioning and orchestration: |
| 40 | + |
| 41 | +- Unlike Kubernetes, `dstack` offers a high-level, AI-engineer-friendly interface, and GPUs work out of the box, with no need to wrangle custom operators, device plugins, or other low-level setup. |
| 42 | +- Unlike Slurm, it’s use-case agnostic — equally suited for training, inference, benchmarking, or even setting up long-running dev environments. |
| 43 | +- It works across clouds and on-prem without vendor lock-in. |
| 44 | + |
| 45 | +With the new Hot Aisle backend, you can automatically provision MI300X VMs for any workload — from experiments to production — with a single `dstack` CLI command. |
| 46 | + |
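For instance, a minimal dev environment configuration requesting a single MI300X might look like this (a sketch; the name and IDE are illustrative):

<div editor-title=".dstack.yml">

```yaml
type: dev-environment
# Illustrative name; use any name you like
name: amd-dev
ide: vscode
resources:
  # One MI300X GPU; dstack matches this to a Hot Aisle VM
  gpu: MI300X:1
```

</div>
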
## Getting started

Before configuring `dstack` to use Hot Aisle’s VMs, complete these steps:

1. Create a project via `ssh admin.hotaisle.app`
2. Get credits or approve a payment method
3. Create an API key

Then, configure the backend in `~/.dstack/server/config.yml`:

<div editor-title="~/.dstack/server/config.yml">

```yaml
projects:
- name: main
  backends:
  - type: hotaisle
    team_handle: hotaisle-team-handle
    creds:
      type: api_key
      api_key: 9c27a4bb7a8e472fae12ab34.3f2e3c1db75b9a0187fd2196c6b3e56d2b912e1c439ba08d89e7b6fcd4ef1d3f
```

</div>

Install and start the `dstack` server:

<div class="termy">

```shell
$ pip install "dstack[server]"
$ dstack server
```

</div>

For more details, see [Installation](../../docs/installation/index.md).

| 84 | + |
| 85 | +Use the `dstack` CLI to |
| 86 | +manage [dev environments](../../docs/concepts/dev-environments.md), [tasks](../../docs/concepts/tasks.md), |
| 87 | +and [services](../../docs/concepts/services.md). |
| 88 | + |
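As an example, `.dstack.yml` could define a simple task that verifies the GPU is visible (a sketch; the image and command are illustrative):

<div editor-title=".dstack.yml">

```yaml
type: task
# Illustrative name for the run
name: rocm-smi-check
# A ROCm base image so AMD GPU tooling is available
image: rocm/dev-ubuntu-22.04
commands:
  - rocm-smi
resources:
  gpu: MI300X:1
```

</div>
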
<div class="termy">

```shell
$ dstack apply -f .dstack.yml

 #  BACKEND                   RESOURCES                                       INSTANCE TYPE                     PRICE
 1  hotaisle (us-michigan-1)  cpu=13 mem=224GB disk=12288GB MI300X:192GB:1    1x MI300X 13x Xeon Platinum 8470  $1.99
 2  hotaisle (us-michigan-1)  cpu=8 mem=224GB disk=12288GB MI300X:192GB:1     1x MI300X 8x Xeon Platinum 8470   $1.99

 Submit the run? [y/n]:
```

</div>

Currently, `dstack` supports 1-GPU Hot Aisle VMs. Support for 8-GPU VMs will be added once Hot Aisle makes them available through its API.

> If you prefer to use Hot Aisle’s bare-metal 8-GPU clusters with `dstack`, you can create an [SSH fleet](../../docs/concepts/fleets.md#ssh-fleets).
> This way, you’ll be able to run [distributed tasks](../../docs/concepts/tasks.md#distributed-tasks) efficiently across the cluster.

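A minimal SSH fleet configuration for such a cluster might look like this (a sketch; the hostnames, user, and key path are placeholders for your own):

<div editor-title="fleet.dstack.yml">

```yaml
type: fleet
# Illustrative fleet name
name: mi300x-fleet
# Connect to existing bare-metal hosts over SSH
ssh_config:
  user: ubuntu
  identity_file: ~/.ssh/id_rsa
  hosts:
    - 10.0.0.1
    - 10.0.0.2
```

</div>
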
!!! info "What's next?"
    1. Check [Quickstart](../../docs/quickstart.md)
    2. Learn more about [Hot Aisle :material-arrow-top-right-thin:{ .external }](https://hotaisle.xyz/){:target="_blank"}
    3. Explore [dev environments](../../docs/concepts/dev-environments.md),
       [tasks](../../docs/concepts/tasks.md), [services](../../docs/concepts/services.md),
       and [fleets](../../docs/concepts/fleets.md)
    4. Join [Discord :material-arrow-top-right-thin:{ .external }](https://discord.gg/u8SmfwPpMd){:target="_blank"}