104 changes: 104 additions & 0 deletions docker/Dockerfile.newton
@@ -0,0 +1,104 @@
# Copyright (c) 2022-2026, The Isaac Lab Project Developers (https://github.com/isaac-sim/IsaacLab/blob/main/CONTRIBUTORS.md).
🔴 Copyright year mismatch: All existing Dockerfiles on develop use 2022-2025; this file has 2022-2026. This will likely fail the license/copyright CI check. Please align with the repo convention.

# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

# Newton-only image for headless HPC training (no Omniverse/Isaac Sim).
# Designed for Singularity/Apptainer conversion on SLURM clusters.
#
# Build:
# docker build -t isaac-lab-newton -f docker/Dockerfile.newton .
#
# Run:
# docker run --rm --gpus all isaac-lab-newton \
# python /workspace/isaaclab/scripts/reinforcement_learning/rsl_rl/train.py \
# --task Isaac-Reach-Franka-v0 --num_envs 64 --headless \
# env.sim.physics=newton --max_iterations 5
#
# Singularity conversion:
# docker save isaac-lab-newton -o isaac-lab-newton.tar
# apptainer build isaac-lab-newton.sif docker-archive://isaac-lab-newton.tar

FROM nvidia/cuda:12.8.1-devel-ubuntu22.04

SHELL ["/bin/bash", "-c"]

LABEL description="Isaac Lab Newton-only image for headless HPC training (no Omniverse/Isaac Sim)."

ARG ISAACLAB_PATH_ARG=/workspace/isaaclab
ENV ISAACLAB_PATH=${ISAACLAB_PATH_ARG}

ENV LANG=C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive

# ---- System packages + Python 3.12 ----
RUN apt-get update && \
apt-get install -y --no-install-recommends software-properties-common && \
add-apt-repository -y ppa:deadsnakes/ppa && \
apt-get update && \
apt-get install -y --no-install-recommends \
python3.12 \
python3.12-dev \
python3.12-venv \
build-essential \
cmake \
git \
🟢 Consider: For a headless training container, do you actually need libgl1-mesa-glx (OpenGL) and libusb-1.0-0 (USB)? These add image size for what's meant to be a lightweight container. If they're transitive deps (e.g., OpenCV via torchvision), keep them but add a comment explaining why.

libglib2.0-0 \
libgl1-mesa-glx \
libusb-1.0-0 \
ncurses-term \
wget && \
🟢 Minor: apt -y autoremove && apt clean autoclean triggers a stability warning from apt. Use apt-get -y autoremove && apt-get clean && rm -rf /var/lib/apt/lists/* to match the Dockerfile.base convention. (The rm -rf line is already present; autoclean is redundant once apt-get clean runs.)

apt -y autoremove && apt clean autoclean && \
rm -rf /var/lib/apt/lists/*
Comment on lines +51 to +52
P2 apt used instead of apt-get in non-interactive cleanup

apt is intended for interactive terminal use and may produce warnings (or behave differently) in non-interactive scripts. Docker best-practice is to use apt-get, which has a stable, scriptable CLI. All other Dockerfiles in this repo use apt-get exclusively.

Additionally, apt clean autoclean is not valid apt syntax — apt accepts one subcommand at a time, so autoclean is silently ignored here. The correct invocation uses apt-get:

Suggested change
-    apt -y autoremove && apt clean autoclean && \
+    apt-get -y autoremove && apt-get clean && apt-get autoclean && \
     rm -rf /var/lib/apt/lists/*


# ---- Python virtual environment ----
RUN python3.12 -m venv /opt/isaaclab-venv
ENV PATH="/opt/isaaclab-venv/bin:$PATH"
Comment on lines +54 to +56
P2 VIRTUAL_ENV not exported alongside PATH

The venv is activated by prepending its bin/ to PATH, but the VIRTUAL_ENV environment variable is never set. Many Python tooling components — including the command_install logging in isaaclab/cli/commands/install.py and pip itself — check VIRTUAL_ENV to confirm they are operating inside a virtual environment. Without it, tools that inspect this variable will not detect the venv, which can lead to unexpected behaviour (e.g., packages being installed into the wrong location if PATH resolution is ever disrupted).

Suggested change
 # ---- Python virtual environment ----
 RUN python3.12 -m venv /opt/isaaclab-venv
+ENV VIRTUAL_ENV=/opt/isaaclab-venv
 ENV PATH="/opt/isaaclab-venv/bin:$PATH"
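To illustrate why this matters, here is a minimal sketch (the helper name venv_markers is hypothetical, not Isaac Lab code) of the two signals Python tooling commonly checks to decide whether it is running inside a virtual environment:

```python
import os
import sys

def venv_markers(environ=None):
    """Report the two common signals for 'am I in a virtualenv?':
    the VIRTUAL_ENV variable (set by `activate`, or via ENV in a
    Dockerfile) and the interpreter's own prefix comparison."""
    environ = os.environ if environ is None else environ
    return {
        # What tools that inspect the environment (e.g. pip) look at:
        "VIRTUAL_ENV": environ.get("VIRTUAL_ENV"),
        # True whenever the running interpreter itself lives in a venv,
        # independent of any environment variable:
        "prefix_differs": sys.prefix != sys.base_prefix,
    }
```

With only PATH prepended, the VIRTUAL_ENV key stays None even though the venv's python is the interpreter that runs, which is exactly the gap the suggestion closes.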


RUN pip install --no-cache-dir --upgrade pip setuptools wheel

# ---- Install PyTorch (CUDA 12.8) ----
# Cached as a separate layer since it's ~2GB and rarely changes.
RUN pip install --no-cache-dir \
torch==2.10.0+cu128 \
torchvision==0.25.0+cu128 \
--index-url https://download.pytorch.org/whl/cu128

# ---- Copy Isaac Lab source tree ----
COPY . ${ISAACLAB_PATH}
RUN chmod +x ${ISAACLAB_PATH}/isaaclab.sh
Comment on lines +68 to +69
P1 Missing Windows line-ending fix for shell scripts

All other Dockerfiles in this repo (Dockerfile.base, Dockerfile.curobo) include a step to strip Windows-style \r carriage returns from .sh files immediately after the COPY step:

RUN find ${ISAACLAB_PATH} -type f -name "*.sh" -exec sed -i 's/\r$//' {} +

If this image is built on Windows (Docker Desktop / WSL), isaaclab.sh may contain \r\n line endings, causing the build to fail at the RUN ${ISAACLAB_PATH}/isaaclab.sh -i step with:

bash: /workspace/isaaclab/isaaclab.sh: /usr/bin/env bash^M: bad interpreter: No such file or directory

The fix should be added between the COPY and chmod steps, consistent with the other Dockerfiles.
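The failure mode and the fix can be reproduced in isolation with a throwaway file (file name hypothetical):

```shell
# Simulate a Windows checkout: a script saved with CRLF line endings.
printf '#!/usr/bin/env bash\r\necho ok\r\n' > demo.sh

# At this point bash would fail with "bad interpreter: ...bash^M".
# Strip the trailing carriage returns, exactly as Dockerfile.base does:
sed -i 's/\r$//' demo.sh

# Confirm no carriage returns remain:
grep -q "$(printf '\r')" demo.sh && echo "CR still present" || echo "clean"
```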


# ---- Install Isaac Lab packages via isaaclab.sh ----
🟡 Cache invalidation: This copies the entire repo in one shot. Dockerfile.base uses a two-stage pattern — first copying only files needed for dependency installation (isaaclab.*, pyproject.toml, tools/, source/), running isaaclab.sh --install, then copying the rest. With the current single COPY, any change to any file (README, docs, scripts) busts the cache and forces a full isaaclab.sh -i rebuild.

Suggested:

COPY isaaclab.* environment.yml pyproject.toml ${ISAACLAB_PATH}/
COPY tools/ ${ISAACLAB_PATH}/tools/
COPY source/ ${ISAACLAB_PATH}/source/
RUN find ${ISAACLAB_PATH} -type f -name "*.sh" -exec sed -i 's/\r$//' {} +
RUN chmod +x ${ISAACLAB_PATH}/isaaclab.sh
RUN ${ISAACLAB_PATH}/isaaclab.sh -i
# Copy remaining files (scripts, docs, etc.)
COPY . ${ISAACLAB_PATH}

# This installs all extensions including Newton, RL frameworks, etc.
🟡 Missing Windows line-ending fix: Dockerfile.base runs find ${ISAACLAB_PATH} -type f -name "*.sh" -exec sed -i 's/\r$//' {} + before chmod +x. Without this, Windows contributors who have core.autocrlf=true will get a broken isaaclab.sh (bash will choke on \r in the shebang). Add the same line-ending fix before the chmod.

RUN ${ISAACLAB_PATH}/isaaclab.sh -i
Comment on lines +68 to +73

P2 COPY . before heavy install step defeats layer caching

COPY . ${ISAACLAB_PATH} copies the entire repository in a single step, directly before the expensive isaaclab.sh -i installation layer (~2 GB+). Any change to any file in the repo (including scripts, tests, or docs) will invalidate that layer and force a full re-install from scratch.

The other Dockerfiles (Dockerfile.base, Dockerfile.curobo) use a selective, two-stage copy to separate rarely-changing config from frequently-changing source code:

# Stage 1: copy only files that drive the install (rarely change)
COPY isaaclab.* environment.yml pyproject.toml ${ISAACLAB_PATH}/
COPY tools/ ${ISAACLAB_PATH}/tools/
COPY source/ ${ISAACLAB_PATH}/source/

RUN find ${ISAACLAB_PATH} -type f -name "*.sh" -exec sed -i 's/\r$//' {} +
RUN chmod +x ${ISAACLAB_PATH}/isaaclab.sh
RUN ${ISAACLAB_PATH}/isaaclab.sh -i

# Stage 2: copy the rest (frequently changes, but cheap to redo)
COPY . ${ISAACLAB_PATH}

This pattern ensures that iterating on scripts, tests, or documentation does not require re-downloading all Python dependencies.


# ---- Singularity/Apptainer compatibility ----
RUN touch /bin/nvidia-smi && \
🟡 Missing pip cache mount: Dockerfile.base uses RUN --mount=type=cache,target=/root/.cache/pip for the install step. Consider:

RUN --mount=type=cache,target=/root/.cache/pip ${ISAACLAB_PATH}/isaaclab.sh -i

Also, for consistency with Dockerfile.base, prefer --install (long form) over -i.

touch /bin/nvidia-debugdump && \
touch /bin/nvidia-persistenced && \
touch /bin/nvidia-cuda-mps-control && \
touch /bin/nvidia-cuda-mps-server && \
touch /etc/localtime && \
mkdir -p /var/run/nvidia-persistenced && \
touch /var/run/nvidia-persistenced/socket

RUN mkdir -p /root/.cache/pip && \
mkdir -p /root/.cache/nvidia/GLCache && \
mkdir -p /root/.nv/ComputeCache

# ---- Build-time verification ----
RUN python -c "\
import isaaclab; \
import isaaclab_newton; \
import newton; \
import warp; \
import torch; \
print(f'torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}'); \
print('All Newton imports OK')"
Comment on lines +89 to +97

P2 Build-time CUDA check will always report CUDA as unavailable

torch.cuda.is_available() requires access to a GPU at build time. docker build does not expose GPUs (the --gpus flag applies to docker run, not docker build; no NVIDIA driver is mounted into build layers). This means the print statement will always output:

torch 2.10.0+cu128, CUDA available: False

This does not fail the build (since there's no assertion), but it will mislead users scanning build logs into thinking the CUDA setup is incorrect. Consider removing the CUDA availability check from the build-time verification, or adding a comment clarifying that False is expected here and GPU availability is only confirmed at container runtime.
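One way to keep a build-time smoke test without the misleading line is to verify imports only and leave GPU checks to runtime. A sketch (verify_imports is a hypothetical helper, not part of Isaac Lab):

```python
import importlib

def verify_imports(module_names):
    """Import each module and collect failures as (name, error) pairs.
    An empty result means the image's Python environment is intact; GPU
    availability is deliberately not checked, since no GPU is visible
    during `docker build`."""
    failures = []
    for name in module_names:
        try:
            importlib.import_module(name)
        except Exception as exc:  # ImportError, or any init-time error
            failures.append((name, repr(exc)))
    return failures
```

In the Dockerfile this could replace the inline python -c block, called with ["isaaclab", "isaaclab_newton", "newton", "warp", "torch"] and asserting the result is empty.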


# ---- Shell convenience ----
RUN echo "export ISAACLAB_PATH=${ISAACLAB_PATH}" >> /root/.bashrc && \
echo "export PATH=/opt/isaaclab-venv/bin:\$PATH" >> /root/.bashrc && \
echo "export TZ=$(date +%Z)" >> /root/.bashrc

WORKDIR ${ISAACLAB_PATH}