In this example resource driver, no "actual" GPUs are made available to any
containers. Instead, each container receives a set of environment variables
indicating which GPUs *would* have been injected into it by a real resource
driver and how they *would* have been configured.
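
As a quick check, once one of the example pods is running you can dump its
environment to see the simulated allocation. The pod and container names below are
placeholders, and the variable names shown are illustrative and may differ between
driver versions:

```console
$ kubectl exec <pod-name> -c <container-name> -- env | grep -i gpu
GPU_DEVICE_0=gpu-0
GPU_DEVICE_0_SHARING_STRATEGY=TimeSlicing
```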

For the full list of all 8 available examples, see [`demo/README.md`](demo/README.md).
To run multiple examples at the same time, increase `kubeletPlugin.numDevices`
when installing the Helm chart.
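
For example, a minimal sketch of such an install, assuming the chart lives in this
repository's `deployments/helm/dra-example-driver` directory (adjust the chart path,
release name, and namespace to your setup):

```console
$ helm upgrade -i \
    --create-namespace --namespace dra-example-driver \
    --set kubeletPlugin.numDevices=16 \
    dra-example-driver deployments/helm/dra-example-driver
```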
### Demo DRA Admin Access Feature
This example driver includes support for the [DRA AdminAccess feature](https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/#admin-access), which allows administrators to gain privileged access to devices already in use by other users. This example demonstrates the end-to-end flow by setting the `DRA_ADMIN_ACCESS` environment variable. A driver managing real devices could use this to expose host hardware information.
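
As a rough sketch, a claim requesting admin access might look like the following.
The names and API version are illustrative, and the claim's namespace must typically
be opted in to admin access via a label; see the example manifests for the canonical
form:

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: admin-gpu
  # The namespace usually has to be labeled resource.k8s.io/admin-access: "true"
  # before admin-access claims are allowed in it.
  namespace: gpu-admin
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com
        adminAccess: true   # request privileged access to devices already in use
```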
### Clean Up

Once you have verified everything is running correctly, delete the example apps.
Once you are done, delete the `kind` cluster.
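
A minimal sketch of that clean-up, assuming the example manifests from
[`demo/README.md`](demo/README.md) were applied as-is and using a placeholder for
the cluster name you chose when creating it:

```console
$ kubectl delete -f demo/two-pods-one-gpu-each.yaml -f demo/shared-gpu-across-containers.yaml
$ kind delete cluster --name <cluster-name>
```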

---

This directory contains example workloads that demonstrate different ways to
request and configure devices using Dynamic Resource Allocation (DRA).

Examples prefixed with `basic-` are featured in the
[main README walkthrough](../README.md) and are a good starting point for
learning about DRA.

## Quick Start

The following three examples are designed to run together with the default
cluster configuration (2 GPUs):

| Example | Description | GPUs |
|---|---|---|
|[two-pods-one-gpu-each.yaml](two-pods-one-gpu-each.yaml)| Two pods each get their own exclusive GPU | 2 |
|[shared-gpu-across-containers.yaml](shared-gpu-across-containers.yaml)| Two containers in one pod share a single GPU | 1 |
|[gpu-sharing-strategies.yaml](gpu-sharing-strategies.yaml)| TimeSlicing and SpacePartitioning on two GPUs | 2 |
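
Sharing strategies are selected through the driver's opaque device configuration.
As a sketch only, a `TimeSlicing` request might be expressed roughly like this
(the exact GroupVersionKind and config fields are driver-specific; see
[gpu-sharing-strategies.yaml](gpu-sharing-strategies.yaml) for the canonical form):

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: timesliced-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.example.com
      config:
      - requests: ["gpu"]
        opaque:
          driver: gpu.example.com
          parameters:
            # Driver-specific config object; fields below are illustrative.
            apiVersion: gpu.resource.example.com/v1alpha1
            kind: GpuConfig
            sharing:
              strategy: TimeSlicing
```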

## All Examples

| Example | Description | GPUs | Key Concept |
|---|---|---|---|
|[two-pods-one-gpu-each.yaml](two-pods-one-gpu-each.yaml)| Two pods, each requesting one exclusive GPU | 2 | ResourceClaimTemplate basics |
|[one-pod-two-gpus.yaml](one-pod-two-gpus.yaml)| One container requesting two distinct GPUs | 2 | Multiple requests in a claim |
|[shared-gpu-across-containers.yaml](shared-gpu-across-containers.yaml)| Two containers sharing one GPU within a pod | 1 | Intra-pod GPU sharing |
|[shared-global-claim.yaml](shared-global-claim.yaml)| Two pods sharing a GPU via a pre-created ResourceClaim | 1 | ResourceClaim vs ResourceClaimTemplate |
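
To illustrate the "ResourceClaim vs ResourceClaimTemplate" distinction above: a
ResourceClaimTemplate stamps out a fresh claim for each pod, while a pre-created
ResourceClaim is referenced, and therefore shared, by every pod that names it.
The sketch below is illustrative only (names and API version are placeholders; see
the example files in this directory for the canonical manifests):

```yaml
# One shared claim, created once and referenced by multiple pods.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: shared-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: pod0
spec:
  containers:
  - name: ctr0
    image: ubuntu:22.04
    command: ["bash", "-c", "env | grep -i gpu && sleep 9999"]
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    # resourceClaimName points at the shared claim above; a per-pod claim would
    # use resourceClaimTemplateName instead.
    resourceClaimName: shared-gpu
```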