
Commit d461c5d

Add docs for resuming for lift object

1 parent 61d3b14 commit d461c5d

1 file changed: 26 additions & 1 deletion

docs/pages/example_workflows/reinforcement_learning/step_2_policy_training.rst
@@ -101,6 +101,31 @@ During training, each iteration prints a summary to the console:
    ETA: 00:00:49
 
 
+Resuming from a Checkpoint
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To resume training from a previously saved checkpoint, use the ``--resume`` flag
+together with ``--load_run`` (run folder name) and ``--checkpoint`` (model filename).
+Both arguments are optional; when omitted, the most recent run and latest checkpoint
+are used automatically.
+
+.. code-block:: bash
+
+   python submodules/IsaacLab/scripts/reinforcement_learning/rsl_rl/train.py \
+   --external_callback isaaclab_arena.environments.isaaclab_interop.environment_registration_callback \
+   --task lift_object \
+   --rl_training_mode \
+   --num_envs 4096 \
+   --max_iterations 4000 \
+   --resume \
+   --load_run <timestamp> \
+   --checkpoint model_1999.pt
+
+Replace ``<timestamp>`` with the run folder name under ``logs/rsl_rl/generic_experiment/``.
+If ``--load_run`` is omitted, the latest run is selected. If ``--checkpoint`` is omitted,
+the latest checkpoint in that run is loaded.
+
 Multi-GPU Training
 ^^^^^^^^^^^^^^^^^^

@@ -112,7 +137,7 @@ Add ``--distributed`` to spread environments across all available GPUs:
    --external_callback isaaclab_arena.environments.isaaclab_interop.environment_registration_callback \
    --task lift_object \
    --rl_training_mode \
-   --num_envs 4096\
+   --num_envs 4096 \
    --max_iterations 2000 \
    --distributed
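The resume behavior the new docs describe (latest run and latest checkpoint selected when ``--load_run`` / ``--checkpoint`` are omitted) can be sketched in plain shell. Only the log root ``logs/rsl_rl/generic_experiment/`` and the ``model_<iteration>.pt`` naming come from the diff above; the fake run folders, timestamps, and selection commands below are illustrative assumptions, not the actual logic of `train.py`:

```shell
# Sketch: one way "latest run, latest checkpoint" selection could work,
# assuming timestamped run folders under logs/rsl_rl/generic_experiment/
# and checkpoints named model_<iteration>.pt (names from the docs above).
LOG_ROOT="logs/rsl_rl/generic_experiment"

# Create a fake log layout so the sketch is self-contained and runnable.
mkdir -p "$LOG_ROOT/2024-01-01_12-00-00" "$LOG_ROOT/2024-02-01_12-00-00"
touch "$LOG_ROOT/2024-02-01_12-00-00/model_500.pt" \
      "$LOG_ROOT/2024-02-01_12-00-00/model_1999.pt"

# Latest run: timestamped folder names sort chronologically as strings.
LATEST_RUN=$(ls -1 "$LOG_ROOT" | sort | tail -n 1)

# Latest checkpoint: highest iteration number (numeric sort on the
# part after the underscore, so model_1999.pt beats model_500.pt).
LATEST_CKPT=$(ls -1 "$LOG_ROOT/$LATEST_RUN" | sort -t_ -k2 -n | tail -n 1)

echo "Would resume from: $LATEST_RUN/$LATEST_CKPT"
```

Note the numeric sort: a plain lexical sort would rank ``model_500.pt`` after ``model_1999.pt``, which is why ``sort -n`` on the iteration field is used.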
