32 changes: 16 additions & 16 deletions docs/source/features/hydra.rst
@@ -25,28 +25,28 @@ As a result, training with hydra arguments can be run with the following syntax:

.. code-block:: shell

python scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024
./isaaclab.sh train --library rsl_rl --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024

.. tab-item:: rl_games
:sync: rl_games

.. code-block:: shell

python scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.params.seed=2024
./isaaclab.sh train --library rl_games --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.params.seed=2024

.. tab-item:: skrl
:sync: skrl

.. code-block:: shell

python scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024
./isaaclab.sh train --library skrl --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024

.. tab-item:: sb3
:sync: sb3

.. code-block:: shell

python scripts/reinforcement_learning/sb3/train.py --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024
./isaaclab.sh train --library sb3 --task=Isaac-Cartpole-v0 --headless env.actions.joint_effort.scale=10.0 agent.seed=2024

The above command runs the training script with the task ``Isaac-Cartpole-v0`` in headless mode, setting the
``env.actions.joint_effort.scale`` parameter to 10.0 and the ``agent.seed`` parameter to 2024.
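
A minimal sketch of how a dotted override such as ``env.actions.joint_effort.scale=10.0`` maps onto a nested configuration. This is an illustration of the general mechanism only, not Isaac Lab's or Hydra's actual implementation; ``apply_override`` is a hypothetical helper:

```python
def apply_override(cfg: dict, override: str) -> dict:
    """Apply a single 'dotted.key=value' override to a nested dict."""
    key_path, raw_value = override.split("=", 1)
    keys = key_path.split(".")
    # Best-effort literal parsing: ints and floats are converted, the rest stays a string.
    try:
        value = int(raw_value)
    except ValueError:
        try:
            value = float(raw_value)
        except ValueError:
            value = raw_value
    # Walk down to the parent node, creating intermediate dicts if needed.
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return cfg


cfg = {"env": {"actions": {"joint_effort": {"scale": 1.0}}}, "agent": {"seed": 42}}
apply_override(cfg, "env.actions.joint_effort.scale=10.0")
apply_override(cfg, "agent.seed=2024")
print(cfg["env"]["actions"]["joint_effort"]["scale"])  # 10.0
print(cfg["agent"]["seed"])  # 2024
```
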
@@ -216,7 +216,7 @@ override is given:
.. code-block:: bash

# Use Newton physics backend
python train.py --task=Isaac-Reach-Franka-v0 env.physics=newton_mjwarp
./isaaclab.sh train --library rsl_rl --task=Isaac-Reach-Franka-v0 env.physics=newton_mjwarp

The ``default`` field can be set to ``None`` to define an optional feature that stays
disabled unless explicitly selected:
@@ -236,10 +236,10 @@ disabled unless explicitly selected:
.. code-block:: bash

# camera is None -- no camera overhead
python train.py --task=Isaac-Reach-Franka-v0
./isaaclab.sh train --library rsl_rl --task=Isaac-Reach-Franka-v0

# activate camera with the "large" preset
python train.py --task=Isaac-Reach-Franka-v0 env.scene.camera=large
./isaaclab.sh train --library rsl_rl --task=Isaac-Reach-Franka-v0 env.scene.camera=large


.. _hydra-backend-solver-presets:
@@ -299,10 +299,10 @@ is currently beta.
.. code-block:: bash

# Select the Kamino solver preset everywhere it is defined
python train.py --task=Isaac-Cartpole-v0 presets=newton_kamino
./isaaclab.sh train --library rsl_rl --task=Isaac-Cartpole-v0 presets=newton_kamino

# Select the Kamino solver preset for a specific physics config path
python train.py --task=Isaac-Cartpole-v0 env.sim.physics=newton_kamino
./isaaclab.sh train --library rsl_rl --task=Isaac-Cartpole-v0 env.sim.physics=newton_kamino

The ``newton_kamino`` preset is currently defined for ``Isaac-Cartpole-Direct-v0``,
``Isaac-Ant-Direct-v0``, ``Isaac-Cartpole-v0``, and ``Isaac-Ant-v0``. Passing
@@ -352,7 +352,7 @@ including inside dict-valued fields such as ``actuators``:
.. code-block:: bash

# Select MJWarp preset globally -- sets armature to 0.01
python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0 presets=newton_mjwarp
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0 presets=newton_mjwarp


Using Presets
@@ -362,29 +362,29 @@ Using Presets

.. code-block:: bash

python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0 \
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0 \
env.events=newton_mjwarp

**Global presets** -- apply the same preset name everywhere it exists:

.. code-block:: bash

# Apply "newton_mjwarp" preset to all configs that define it
python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0 \
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0 \
presets=newton_mjwarp

**Multiple global presets** -- apply several non-conflicting presets:

.. code-block:: bash

python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0 \
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0 \
presets=newton_mjwarp,inference

**Combined** -- global presets + scalar overrides:

.. code-block:: bash

python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0 \
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0 \
presets=newton_mjwarp \
env.sim.dt=0.002

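A rough sketch of what "several non-conflicting presets" means: each preset contributes a set of overrides, and two presets conflict when they assign different values to the same key. This is a hypothetical helper for illustration, not Isaac Lab's preset machinery, and the preset contents shown are made up:

```python
def merge_presets(*presets: dict) -> dict:
    """Merge preset override dicts, rejecting conflicting assignments."""
    merged: dict = {}
    for preset in presets:
        for key, value in preset.items():
            if key in merged and merged[key] != value:
                raise ValueError(f"conflicting presets for {key!r}")
            merged[key] = value
    return merged


# Hypothetical override contents for two presets that touch disjoint keys.
newton_mjwarp = {"env.sim.physics": "newton_mjwarp", "armature": 0.01}
inference = {"agent.training": False}
print(merge_presets(newton_mjwarp, inference))
# {'env.sim.physics': 'newton_mjwarp', 'armature': 0.01, 'agent.training': False}
```
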
@@ -419,10 +419,10 @@ actuator armature is set to ``0.01``.
.. code-block:: bash

# Default (PhysX events, armature=0.0)
python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0

# MJWarp (Newton events, armature=0.01)
python train.py --task=Isaac-Velocity-Rough-Anymal-C-v0 presets=newton_mjwarp
./isaaclab.sh train --library rsl_rl --task=Isaac-Velocity-Rough-Anymal-C-v0 presets=newton_mjwarp


Summary
24 changes: 12 additions & 12 deletions docs/source/features/multi_gpu.rst
@@ -96,14 +96,14 @@ To train with multiple GPUs, use the following command, where ``--nproc_per_node

.. code-block:: shell

python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/train.py --library rl_games --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: rsl_rl
:sync: rsl_rl

.. code-block:: shell

python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/train.py --library rsl_rl --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: skrl
:sync: skrl
@@ -115,14 +115,14 @@ To train with multiple GPUs, use the following command, where ``--nproc_per_node

.. code-block:: shell

python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/train.py --library skrl --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: JAX
:sync: jax

.. code-block:: shell

python -m skrl.utils.distributed.jax --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless --distributed --ml_framework jax
python -m skrl.utils.distributed.jax --nnodes=1 --nproc_per_node=2 scripts/reinforcement_learning/train.py --library skrl --task=Isaac-Cartpole-v0 --headless --distributed --ml_framework jax

.. _multi-gpu-nccl-troubleshooting:

@@ -171,14 +171,14 @@ For the master node, use the following command, where ``--nproc_per_node`` repre

.. code-block:: shell

python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/train.py --library rl_games --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: rsl_rl
:sync: rsl_rl

.. code-block:: shell

python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/train.py --library rsl_rl --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: skrl
:sync: skrl
@@ -190,14 +190,14 @@ For the master node, use the following command, where ``--nproc_per_node`` repre

.. code-block:: shell

python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/train.py --library skrl --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: JAX
:sync: jax

.. code-block:: shell

python -m skrl.utils.distributed.jax --nproc_per_node=2 --nnodes=2 --node_rank=0 --coordinator_address=ip_of_master_machine:5555 scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless --distributed --ml_framework jax
python -m skrl.utils.distributed.jax --nproc_per_node=2 --nnodes=2 --node_rank=0 --coordinator_address=ip_of_master_machine:5555 scripts/reinforcement_learning/train.py --library skrl --task=Isaac-Cartpole-v0 --headless --distributed --ml_framework jax

Note that the port (``5555``) can be replaced with any other available port.
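If the chosen port is taken, a quick way to find an available one is to bind to port 0 and let the OS pick. This is a generic sketch using the standard library, not part of the Isaac Lab tooling:

```python
import socket


def find_free_port() -> int:
    """Ask the OS for a currently unused TCP port (e.g. for --master_port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))  # port 0: the OS assigns an available port
        return s.getsockname()[1]


port = find_free_port()
print(0 < port < 65536)  # True
```

Note the port could still be claimed by another process between this check and launch, so it is a convenience, not a guarantee.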

@@ -211,14 +211,14 @@ For non-master nodes, use the following command, replacing ``--node_rank`` with

.. code-block:: shell

python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/train.py --library rl_games --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: rsl_rl
:sync: rsl_rl

.. code-block:: shell

python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/rsl_rl/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/train.py --library rsl_rl --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: skrl
:sync: skrl
@@ -230,14 +230,14 @@ For non-master nodes, use the following command, replacing ``--node_rank`` with

.. code-block:: shell

python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless --distributed
python -m torch.distributed.run --nproc_per_node=2 --nnodes=2 --node_rank=1 --master_addr=<ip_of_master> --master_port=5555 scripts/reinforcement_learning/train.py --library skrl --task=Isaac-Cartpole-v0 --headless --distributed

.. tab-item:: JAX
:sync: jax

.. code-block:: shell

python -m skrl.utils.distributed.jax --nproc_per_node=2 --nnodes=2 --node_rank=1 --coordinator_address=ip_of_master_machine:5555 scripts/reinforcement_learning/skrl/train.py --task=Isaac-Cartpole-v0 --headless --distributed --ml_framework jax
python -m skrl.utils.distributed.jax --nproc_per_node=2 --nnodes=2 --node_rank=1 --coordinator_address=ip_of_master_machine:5555 scripts/reinforcement_learning/train.py --library skrl --task=Isaac-Cartpole-v0 --headless --distributed --ml_framework jax

For more details on multi-node training with PyTorch, please visit the
`PyTorch documentation <https://pytorch.org/tutorials/intermediate/ddp_series_multinode.html>`_.
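
Under ``torch.distributed.run``, each spawned process receives ``RANK``, ``LOCAL_RANK``, and ``WORLD_SIZE`` environment variables, which a training script can use to pick its GPU and scope logging to rank 0. The helper below is a hypothetical illustration of that convention, not Isaac Lab's actual code:

```python
import os


def distributed_info() -> dict:
    """Read the torch.distributed.run environment variables for this process."""
    rank = int(os.environ.get("RANK", "0"))
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    return {
        "rank": rank,
        "device": f"cuda:{local_rank}",  # one GPU per local process
        "is_master": rank == 0,          # rank 0 typically owns logging/checkpoints
        "world_size": world_size,
    }


# Simulate what the second worker of a 2-GPU, single-node job would see:
os.environ.update({"RANK": "1", "LOCAL_RANK": "1", "WORLD_SIZE": "2"})
info = distributed_info()
print(info["device"], info["is_master"])  # cuda:1 False
```
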
2 changes: 1 addition & 1 deletion docs/source/features/population_based_training.rst
@@ -114,7 +114,7 @@ Launch *N* workers, where *n* indicates each worker index:
.. code-block:: bash

# Run this once per worker (n = 0..N-1), all pointing to the same directory/workspace
./isaaclab.sh -p scripts/reinforcement_learning/rl_games/train.py \
./isaaclab.sh train --library rl_games \
--seed=<n> \
--task=Isaac-Repose-Cube-Shadow-Direct-v0 \
--num_envs=8192 \
2 changes: 1 addition & 1 deletion docs/source/features/ray.rst
@@ -138,7 +138,7 @@ In a different terminal, run the following.
--cfg_file scripts/reinforcement_learning/ray/hyperparameter_tuning/vision_cartpole_cfg.py \
--cfg_class CartpoleTheiaJobCfg \
--run_mode local \
--workflow scripts/reinforcement_learning/rl_games/train.py \
--workflow scripts/reinforcement_learning/train.py --library rl_games \
--num_workers_per_node <NUMBER_OF_GPUS_IN_COMPUTER>


10 changes: 5 additions & 5 deletions docs/source/features/visualization.rst
@@ -63,20 +63,20 @@ Launch visualizers from the command line with ``--visualizer`` (or ``--viz`` ali
.. code-block:: bash

# Launch all visualizers (comma-delimited list, no spaces)
python scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --viz kit,newton,rerun
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0 --viz kit,newton,rerun

# Launch only the Newton visualizer
python scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --viz newton
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0 --viz newton

# Launch the Viser web-based visualizer
python scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --viz viser
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0 --viz viser


To run in headless mode, omit the ``--viz`` argument:

.. code-block:: bash

python scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0

.. note::

@@ -491,7 +491,7 @@ the num of environments can be overwritten and decreased using ``--num_envs``:

.. code-block:: bash

python scripts/reinforcement_learning/rsl_rl/train.py --task Isaac-Cartpole-v0 --viz rerun --num_envs 512
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0 --viz rerun --num_envs 512


**Rerun Visualizer FPS Control**
2 changes: 1 addition & 1 deletion docs/source/how-to/profile_with_nsys.rst
@@ -42,7 +42,7 @@ The following command shows how to capture a profile for the ``Isaac-Cartpole-v0
-t nvtx,cuda \
--python-functions-trace=scripts/benchmarks/nsys_trace.json \
-o my_profile \
./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py \
./isaaclab.sh train --library rsl_rl \
--task=Isaac-Cartpole-v0 \
--headless \
--max_iterations=3
2 changes: 1 addition & 1 deletion docs/source/how-to/record_video.rst
@@ -21,7 +21,7 @@ Example usage:

.. code-block:: shell

python scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-v0 --headless --video --video_length 100 --video_interval 500
./isaaclab.sh train --library rl_games --task=Isaac-Cartpole-v0 --headless --video --video_length 100 --video_interval 500


The recorded videos will be saved in the same directory as the training checkpoints, under
4 changes: 2 additions & 2 deletions docs/source/migration/migrating_from_isaacgymenvs.rst
@@ -916,7 +916,7 @@ To launch a training in Isaac Lab, use the command:

.. code-block:: bash

python scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-Direct-v0 --headless
./isaaclab.sh train --library rl_games --task=Isaac-Cartpole-Direct-v0 --headless

Launching Inferencing
~~~~~~~~~~~~~~~~~~~~~
@@ -925,7 +925,7 @@ To launch inferencing in Isaac Lab, use the command:

.. code-block:: bash

python scripts/reinforcement_learning/rl_games/play.py --task=Isaac-Cartpole-Direct-v0 --num_envs=25 --checkpoint=<path/to/checkpoint>
./isaaclab.sh play --library rl_games --task=Isaac-Cartpole-Direct-v0 --num_envs=25 --checkpoint=<path/to/checkpoint>


Additional Resources
4 changes: 2 additions & 2 deletions docs/source/migration/migrating_from_omniisaacgymenvs.rst
@@ -983,7 +983,7 @@ To launch a training in Isaac Lab, use the command:

.. code-block:: bash

python scripts/reinforcement_learning/rl_games/train.py --task=Isaac-Cartpole-Direct-v0 --headless
./isaaclab.sh train --library rl_games --task=Isaac-Cartpole-Direct-v0 --headless

Launching Inferencing
~~~~~~~~~~~~~~~~~~~~~
@@ -992,7 +992,7 @@ To launch inferencing in Isaac Lab, use the command:

.. code-block:: bash

python scripts/reinforcement_learning/rl_games/play.py --task=Isaac-Cartpole-Direct-v0 --num_envs=25 --checkpoint=<path/to/checkpoint>
./isaaclab.sh play --library rl_games --task=Isaac-Cartpole-Direct-v0 --num_envs=25 --checkpoint=<path/to/checkpoint>


.. _`OmniIsaacGymEnvs`: https://github.com/isaac-sim/OmniIsaacGymEnvs
4 changes: 2 additions & 2 deletions docs/source/migration/migrating_to_isaaclab_3-0.rst
@@ -444,10 +444,10 @@ Pass ``presets=newton_mjwarp`` (or ``presets=physx``) on the CLI to swap the ent
.. code-block:: bash

# Run with Newton backend
python train.py task=Isaac-Franka-Cabinet-v0 presets=newton_mjwarp
./isaaclab.sh train --library rsl_rl --task=Isaac-Franka-Cabinet-v0 presets=newton_mjwarp

# Run with default (PhysX) backend
python train.py task=Isaac-Franka-Cabinet-v0
./isaaclab.sh train --library rsl_rl --task=Isaac-Franka-Cabinet-v0

Adding Multi-Backend Support to an Environment
-----------------------------------------------
@@ -161,10 +161,10 @@ Users then select the MJWarp Newton preset at the command line:
.. code-block:: bash

# Default (PhysX)
python train.py --task Isaac-Cartpole-v0
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0

# MJWarp (Newton backend)
python train.py --task Isaac-Cartpole-v0 presets=newton_mjwarp
./isaaclab.sh train --library rsl_rl --task Isaac-Cartpole-v0 presets=newton_mjwarp

The Physics Manager
-------------------
8 changes: 4 additions & 4 deletions docs/source/overview/core-concepts/sensors/camera.rst
@@ -149,13 +149,13 @@ The active preset is selected at launch via the ``presets=`` CLI argument:
.. code-block:: bash

# Use Newton Warp renderer
python train.py task=Isaac-Cartpole-RGB-Camera-Direct-v0 presets=newton_renderer
./isaaclab.sh train --library rsl_rl --task=Isaac-Cartpole-RGB-Camera-Direct-v0 presets=newton_renderer

# Use OVRTX renderer
python train.py task=Isaac-Cartpole-RGB-Camera-Direct-v0 presets=ovrtx_renderer
./isaaclab.sh train --library rsl_rl --task=Isaac-Cartpole-RGB-Camera-Direct-v0 presets=ovrtx_renderer

# Use default (Isaac RTX)
python train.py task=Isaac-Cartpole-RGB-Camera-Direct-v0
./isaaclab.sh train --library rsl_rl --task=Isaac-Cartpole-RGB-Camera-Direct-v0


Accessing camera data
@@ -173,7 +173,7 @@ When using the RTX renderer, add ``--enable_cameras`` when launching:

.. code-block:: shell

python scripts/reinforcement_learning/rl_games/train.py \
./isaaclab.sh train --library rl_games \
--task=Isaac-Cartpole-RGB-Camera-Direct-v0 --headless --enable_cameras

