GPU-accelerated, multi-arch (linux/amd64, linux/arm64/v8) docker images:
Images available for MAX versions ≥ 24.6.0.
Build chain
The same as the JupyterLab MAX/Mojo docker stack.
Features
The same as the JupyterLab MAX/Mojo docker stack plus the CUDA runtime.
👉 See the CUDA Version Matrix for detailed information.
Subtags
The same as the JupyterLab MAX/Mojo docker stack.
Requirements

The same as the JupyterLab MAX/Mojo docker stack plus
- NVIDIA GPU
- NVIDIA Linux driver
- NVIDIA Container Toolkit
ℹ️ The host running the GPU-accelerated images only requires the NVIDIA driver; the CUDA toolkit does not have to be installed.
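As a quick sanity check (a sketch, not part of the stack's tooling): the driver ships `nvidia-smi`, while `nvcc` only comes with the CUDA toolkit, so the following distinguishes the two on a typical Linux host:

```shell
# nvidia-smi is installed by the NVIDIA driver;
# nvcc only by the CUDA toolkit (which the host does not need).
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "NVIDIA driver: found"
else
  echo "NVIDIA driver: not found"
fi

if command -v nvcc >/dev/null 2>&1; then
  echo "CUDA toolkit: found (not required on the host)"
else
  echo "CUDA toolkit: not found (that is fine)"
fi
```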
To install the NVIDIA Container Toolkit, follow the instructions for your platform:
latest:

```shell
cd base

docker build \
  --build-arg BASE_IMAGE=ubuntu \
  --build-arg BASE_IMAGE_TAG=24.04 \
  --build-arg BUILD_ON_IMAGE=glcr.b-data.ch/cuda/python/ver \
  --build-arg MOJO_VERSION=26.2.0 \
  --build-arg PYTHON_VERSION=3.14.4 \
  --build-arg CUDA_IMAGE_FLAVOR=base \
  --build-arg INSTALL_MAX=1 \
  -t jupyterlab/cuda/max/base \
  -f latest.Dockerfile .
```

version:
```shell
cd base

docker build \
  --build-arg BASE_IMAGE=ubuntu \
  --build-arg BASE_IMAGE_TAG=24.04 \
  --build-arg BUILD_ON_IMAGE=glcr.b-data.ch/cuda/python/ver \
  --build-arg CUDA_IMAGE_FLAVOR=base \
  --build-arg INSTALL_MAX=1 \
  -t jupyterlab/cuda/max/base:MAJOR.MINOR.PATCH \
  -f MAJOR.MINOR.PATCH.Dockerfile .
```

For MAJOR.MINOR.PATCH ≥ 24.6.0.
Create an empty directory using docker:

```shell
docker run --rm \
  -v "${PWD}/jupyterlab-jovyan":/dummy \
  alpine chown 1000:100 /dummy
```

It will be bind mounted as the JupyterLab user's home directory and automatically populated.
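If your own UID and GID already match the container's default user jovyan (UID 1000, GID 100), a plain `mkdir` is a possible docker-free alternative — a sketch only; the `chown` via alpine above is the general approach:

```shell
# Docker-free sketch: only sufficient when your own UID is 1000 (GID 100),
# because then jovyan inside the container can already write to the directory.
mkdir -p "${PWD}/jupyterlab-jovyan"

# Inspect the numeric owner; it should read 1000 100 for the default jovyan user.
ls -dn "${PWD}/jupyterlab-jovyan"
```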
self built:

```shell
docker run -it --rm \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -u root \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  -e NB_UID=$(id -u) \
  -e NB_GID=$(id -g) \
  jupyterlab/cuda/max/base[:MAJOR.MINOR.PATCH]
```

from the project's GitLab Container Registries:
```shell
docker run -it --rm \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -u root \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  -e NB_UID=$(id -u) \
  -e NB_GID=$(id -g) \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]
```

IMAGE being one of
The use of the `-v` flag in the command mounts the empty directory on the host
(`${PWD}/jupyterlab-jovyan` in the command) as `/home/jovyan` in the container.

`-e NB_UID=$(id -u) -e NB_GID=$(id -g)` instructs the startup script to switch
the user ID and the primary group ID of `${NB_USER}` to the UID and GID of the
user executing the command.
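To see what those two flags actually pass, this quick sketch prints the values that `$(id -u)` and `$(id -g)` expand to — the shell performs this expansion before docker ever sees the command line:

```shell
# The shell expands $(id -u) / $(id -g) before docker runs,
# so the container receives plain numeric NB_UID / NB_GID values.
echo "NB_UID=$(id -u) NB_GID=$(id -g)"
```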
The server logs appear in the terminal.
Create an empty home directory:

```shell
mkdir "${PWD}/jupyterlab-root"
```

Use the following command to run the container as root:
```shell
podman run -it --rm \
  --device 'nvidia.com/gpu=all' \
  -p 8888:8888 \
  -u root \
  -v "${PWD}/jupyterlab-root":/home/root \
  -e NB_USER=root \
  -e NB_UID=0 \
  -e NB_GID=0 \
  -e NOTEBOOK_ARGS="--allow-root" \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]
```

Creating a home directory might not be required. Also
```shell
docker run -it --rm \
  --gpus '"device=all"' \
  -p 8888:8888 \
  -v "${PWD}/jupyterlab-jovyan":/home/jovyan \
  IMAGE[:MAJOR[.MINOR[.PATCH]]]
```

might be sufficient.
What makes this project different:
- Derived from nvidia/cuda:base-ubuntu24.04
- IDE: code-server next to JupyterLab
- Just Python – no Conda / Mamba
The CUDA-based JupyterLab MAX docker stack is derived from the CUDA-based Python
docker stack.
ℹ️ See also Python docker stack > Notes on CUDA.
