1 | 1 | --- |
2 | 2 | title: Resource constraints |
3 | 3 | weight: 30 |
4 | | -description: Specify the runtime options for a container |
5 | | -keywords: docker, daemon, configuration, runtime |
| 4 | +description: Limit container memory and CPU usage with runtime configuration flags |
| 5 | +keywords: resource constraints, memory limits, CPU limits, cgroups, OOM, swap, docker run, memory swap |
6 | 6 | aliases: |
7 | 7 | - /engine/admin/resource_constraints/ |
8 | 8 | - /config/containers/resource_constraints/ |
@@ -265,84 +265,5 @@ If the kernel or Docker daemon isn't configured correctly, an error occurs. |
265 | 265 |
|
266 | 266 | ## GPU |
267 | 267 |
|
268 | | -### Access an NVIDIA GPU |
269 | | - |
270 | | -#### Prerequisites |
271 | | - |
272 | | -Visit the official [NVIDIA drivers page](https://www.nvidia.com/Download/index.aspx) |
273 | | -to download and install the proper drivers. Reboot your system once you have |
274 | | -done so. |
275 | | - |
276 | | -Verify that your GPU is running and accessible. |
277 | | - |
278 | | -#### Install nvidia-container-toolkit |
279 | | - |
280 | | -Follow the official NVIDIA Container Toolkit [installation instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html). |
281 | | - |
282 | | -#### Expose GPUs for use |
283 | | - |
284 | | -Include the `--gpus` flag when you start a container to access GPU resources. |
285 | | -Specify how many GPUs to use. For example: |
286 | | - |
287 | | -```console |
288 | | -$ docker run -it --rm --gpus all ubuntu nvidia-smi |
289 | | -``` |
290 | | - |
291 | | -Exposes all available GPUs and returns a result akin to the following: |
292 | | - |
293 | | -```bash |
294 | | -+-------------------------------------------------------------------------------+ |
295 | | -| NVIDIA-SMI 384.130 Driver Version: 384.130 | |
296 | | -|-------------------------------+----------------------+------------------------+ |
297 | | -| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | |
298 | | -| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |
299 | | -|===============================+======================+========================| |
300 | | -| 0 GRID K520 Off | 00000000:00:03.0 Off | N/A | |
301 | | -| N/A 36C P0 39W / 125W | 0MiB / 4036MiB | 0% Default | |
302 | | -+-------------------------------+----------------------+------------------------+ |
303 | | -+-------------------------------------------------------------------------------+ |
304 | | -| Processes: GPU Memory | |
305 | | -| GPU PID Type Process name Usage | |
306 | | -|===============================================================================| |
307 | | -| No running processes found | |
308 | | -+-------------------------------------------------------------------------------+ |
309 | | -``` |
310 | | - |
311 | | -Use the `device` option to specify GPUs. For example: |
312 | | - |
313 | | -```console |
314 | | -$ docker run -it --rm --gpus device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a ubuntu nvidia-smi |
315 | | -``` |
316 | | - |
317 | | -Exposes that specific GPU. |
318 | | - |
319 | | -```console |
320 | | -$ docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi |
321 | | -``` |
322 | | - |
323 | | -Exposes the first and third GPUs. |
324 | | - |
325 | | -> [!NOTE] |
326 | | -> |
327 | | -> NVIDIA GPUs can only be accessed by systems running a single engine. |
328 | | -
|
329 | | -#### Set NVIDIA capabilities |
330 | | - |
331 | | -You can set capabilities manually. For example, on Ubuntu you can run the |
332 | | -following: |
333 | | - |
334 | | -```console |
335 | | -$ docker run --gpus 'all,capabilities=utility' --rm ubuntu nvidia-smi |
336 | | -``` |
337 | | - |
338 | | -This enables the `utility` driver capability which adds the `nvidia-smi` tool to |
339 | | -the container. |
340 | | - |
341 | | -Capabilities as well as other configurations can be set in images via |
342 | | -environment variables. More information on valid variables can be found in the |
343 | | -[nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/docker-specialized.html) |
344 | | -documentation. These variables can be set in a Dockerfile. |
345 | | - |
346 | | -You can also use CUDA images, which set these variables automatically. See the |
347 | | -official [CUDA images](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/cuda) |
348 | | -NGC catalog page. |
| 268 | +For information on how to access NVIDIA GPUs from a container, see |
| 269 | +[GPU access](gpu.md). |