This guide covers how to register RoboLab tasks as runnable Gymnasium environments with your specific robot, observations, actions, and simulation settings. You do not need to modify RoboLab — registration can be done from your own repository.
You need a custom registration if:
- You're using a different robot (not DROID)
- You need a different action space (e.g., end-effector control instead of joint position) — see Robots
- You need custom observations (e.g., depth images, different camera placements)
- You want different lighting, backgrounds, or simulation parameters (dt, decimation, render interval)
If you're using DROID with joint-position actions, RoboLab ships a ready-to-use registration. You can skip to Evaluating a New Policy, which shows how to use the built-in registration directly.
Define which sensor data the simulator should provide to your policy. This uses IsaacLab's observation manager.
For DROID users: RoboLab ships a proprioception observation config and a helper that generates an image observation config from any list of camera configs. If these work for your policy, you can skip this step and import/build them directly in Step 2:
```python
# Image observations — generated from a list of camera configs, so the obs terms
# automatically match whatever cameras you attach to the scene.
from robolab.core.observations.observation_utils import generate_image_obs_from_cameras
from robolab.registrations.droid_jointpos.camera_presets import WRIST_LEFT

ImageObsCfg = generate_image_obs_from_cameras(WRIST_LEFT)

# Proprioception (joint positions, gripper state, EEF pose, etc.)
from robolab.robots.droid import ProprioceptionObservationCfg
```

Presets live in `robolab/registrations/droid_jointpos/camera_presets.py`: `WRIST`, `WRIST_LEFT`, `WRIST_RIGHT`, `WRIST_LEFT_RIGHT`, `WRIST_LEFT_RIGHT_HEAD`, `LEFT_RIGHT`. Pick one or pass your own list. Viewport-only cameras (used for recorded video, not policy input) are attached separately inside the registration function and do not belong in these presets.
If you need different observations (e.g., different cameras, depth data, or custom proprioception), define your own:
```python
# my_policy/observations.py
import isaaclab.envs.mdp as mdp
from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.utils import configclass


@configclass
class ImageObsCfg(ObsGroup):
    """Camera observations for your policy."""

    over_shoulder_left_camera = ObsTerm(
        func=mdp.observations.image,
        params={"sensor_cfg": SceneEntityCfg("over_shoulder_left_camera"), "data_type": "rgb", "normalize": False},
    )
    wrist_cam = ObsTerm(
        func=mdp.observations.image,
        params={"sensor_cfg": SceneEntityCfg("wrist_cam"), "data_type": "rgb", "normalize": False},
    )

    def __post_init__(self) -> None:
        self.enable_corruption = False
        self.concatenate_terms = False
```

The camera names (`over_shoulder_left_camera`, `wrist_cam`) are mapped from the camera configurations you provide in Step 2. You can mix and match — use the built-in `ProprioceptionObservationCfg` from `robolab.robots.droid` with your own custom `ImageObsCfg`, or vice versa.
When to hand-roll an `ImageObsCfg` instead of generating it: `generate_image_obs_from_cameras(...)` produces a uniform observation term per camera (RGB, no noise, no normalization). If you want per-camera customization — different data types (depth), noise corruption, normalization flags, or reading one `TiledCameraCfg` prim via multiple keys — define the class by hand as shown above.
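For instance, a hand-rolled group that mixes RGB and depth terms might look like the following sketch. This is an illustration under assumptions, not RoboLab code: the camera names are placeholders that must match configs you attach in Step 2, and `"distance_to_image_plane"` is IsaacLab's data type for depth output.

```python
# my_policy/observations.py — sketch of a mixed RGB + depth observation group
import isaaclab.envs.mdp as mdp
from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.utils import configclass


@configclass
class RGBDObsCfg(ObsGroup):
    """RGB from one camera, depth from another — per-camera customization."""

    over_shoulder_rgb = ObsTerm(
        func=mdp.observations.image,
        params={"sensor_cfg": SceneEntityCfg("over_shoulder_left_camera"), "data_type": "rgb", "normalize": False},
    )
    # Depth: IsaacLab exposes it via the "distance_to_image_plane" data type.
    wrist_depth = ObsTerm(
        func=mdp.observations.image,
        params={"sensor_cfg": SceneEntityCfg("wrist_cam"), "data_type": "distance_to_image_plane", "normalize": False},
    )

    def __post_init__(self) -> None:
        self.enable_corruption = False
        self.concatenate_terms = False
```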
Create a function that combines the benchmark tasks with your robot, observations, actions, and simulation settings:
```python
# my_policy/register_envs.py
from robolab.constants import DEFAULT_TASK_SUBFOLDERS, TASK_DIR


def register_envs(task_dirs=DEFAULT_TASK_SUBFOLDERS, task=None, cameras=None):
    from robolab.core.environments.factory import auto_discover_and_create_cfgs
    from robolab.core.observations.observation_utils import generate_image_obs_from_cameras, generate_obs_cfg
    from robolab.registrations.droid_jointpos.camera_presets import WRIST_LEFT
    from robolab.robots.droid import (
        DroidCfg,
        DroidJointPositionActionCfg,
        ProprioceptionObservationCfg,
        contact_gripper,
    )
    from robolab.variations.backgrounds import HomeOfficeBackgroundCfg
    from robolab.variations.camera import EgocentricMirroredCameraCfg
    from robolab.variations.lighting import SphereLightCfg

    if cameras is None:
        cameras = WRIST_LEFT

    # The same list feeds both the scene and the image observation group.
    ImageObsCfg = generate_image_obs_from_cameras(cameras)
    # Or roll your own: `from my_policy.observations import ImageObsCfg` (see Step 1).

    # Build the full observation config
    ViewportCameraCfg = generate_image_obs_from_cameras([EgocentricMirroredCameraCfg])
    ObservationCfg = generate_obs_cfg({
        "image_obs": ImageObsCfg(),
        "proprio_obs": ProprioceptionObservationCfg(),
        "viewport_cam": ViewportCameraCfg(),
    })

    auto_discover_and_create_cfgs(
        task_dir=TASK_DIR,
        task_subdirs=task_dirs,
        tasks=task,  # None → discover everything in task_subdirs; str/list → only those
        pattern="*.py",
        env_prefix="",
        env_postfix="",
        observations_cfg=ObservationCfg(),
        actions_cfg=DroidJointPositionActionCfg(),
        robot_cfg=DroidCfg,
        # Policy-observed cameras + the viewport camera (attached for video recording only).
        camera_cfg=[*cameras, EgocentricMirroredCameraCfg],
        lighting_cfg=SphereLightCfg,
        background_cfg=HomeOfficeBackgroundCfg,
        contact_gripper=contact_gripper,
        dt=1 / (60 * 2),  # Physics timestep
        render_interval=8,  # Render every N physics steps
        decimation=8,  # Action repeat
        seed=1,
    )
```

You can verify your registration works by calling `print_env_table()` after registration:
```python
from robolab.core.environments.factory import print_env_table

register_envs()
print_env_table()
```

This prints a table of all registered environments with their configurations.
Note: it is recommended to check that your environments are created correctly before running any policy. You can also run `scripts/check_registered_envs.py` with your registration function.
The examples above register RoboLab's built-in benchmark tasks (from `TASK_DIR`). If you've created your own tasks (see Tasks), you can register them in the same function.
Pass the full file path via `tasks=`:
```python
def register_envs(task_dirs=DEFAULT_TASK_SUBFOLDERS, task=None):
    ...
    # Register a specific task by absolute path
    auto_discover_and_create_cfgs(
        task_dir=TASK_DIR,
        tasks="/path/to/my_tasks/tasks/my_task.py",
        env_prefix="", env_postfix="",
        # ... same observation/robot/camera/lighting/etc. kwargs as above
    )
```

Point `task_dir` at your tasks folder and omit `tasks`:
```python
def register_envs(task_dirs=DEFAULT_TASK_SUBFOLDERS, task=None):
    ...
    # Register all tasks from your own directory
    auto_discover_and_create_cfgs(
        task_dir="/path/to/my_tasks/tasks",
        task_subdirs=[""],
        pattern="*.py",
        env_prefix="", env_postfix="Custom",
        # ... same observation/robot/camera/lighting/etc. kwargs as above
    )
```

You can register both benchmark tasks and your own tasks in the same function — just call `auto_discover_and_create_cfgs()` multiple times (with different `task_dir` / `task_subdirs` / `tasks`). Use `env_postfix` or `add_tags` to distinguish them.
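A sketch of such a combined registration function, under the same assumptions as the full example above. The `"benchmark"` and `"custom"` tag names are hypothetical labels chosen for this guide, and the elided kwargs are the shared observation/robot/camera/lighting settings:

```python
def register_envs(task_dirs=DEFAULT_TASK_SUBFOLDERS, task=None):
    ...
    # 1) Benchmark tasks shipped with RoboLab
    auto_discover_and_create_cfgs(
        task_dir=TASK_DIR,
        task_subdirs=task_dirs,
        tasks=task,
        env_prefix="", env_postfix="",
        add_tags=["benchmark"],  # hypothetical tag to group the built-in tasks
        # ... same observation/robot/camera/lighting/etc. kwargs as above
    )
    # 2) Your own tasks, postfixed and tagged so the two sets are easy to tell apart
    auto_discover_and_create_cfgs(
        task_dir="/path/to/my_tasks/tasks",
        task_subdirs=[""],
        pattern="*.py",
        env_prefix="", env_postfix="Custom",
        add_tags=["custom"],  # hypothetical tag for your own tasks
        # ... same kwargs as above
    )
```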
For complete working examples inside the RoboLab repo:
- `robolab/registrations/example/auto_env_registration.py` — Basic registration example
- `robolab/registrations/droid_jointpos/auto_env_registrations.py` — Full DROID robot registration
| Name | Description | Example |
|---|---|---|
| Task Name | The base task class name (from the Task class). Groups task variants together. | `BananaInBowlTask` |
| Environment Name | The environment name including the task, robot, and any configurations. Each env name uniquely identifies a task and a runnable env. This is the name you query in any eval script. | `BananaInBowlTaskHomeOffice` |
| Config Class Name | The generated configuration class. Always `<env_name>EnvCfg`. | `BananaInBowlTaskHomeOfficeEnvCfg` |
| Tag | A group of related environments for easy querying. | `pick_place`, `all` |
```
env_name = <env_prefix> + <TaskClassName> + <env_postfix>
```

For example, with `env_prefix=""` and `env_postfix="HomeOffice"`:
- Task class: `BananaInBowlTask`
- Environment name: `BananaInBowlTaskHomeOffice`
- Config class: `BananaInBowlTaskHomeOfficeEnvCfg`
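The composition rule is plain string concatenation. As a quick illustration (`make_env_name` is a hypothetical helper written for this guide, not part of RoboLab's API):

```python
def make_env_name(task_class_name: str, env_prefix: str = "", env_postfix: str = "") -> str:
    """Compose env_name = <env_prefix> + <TaskClassName> + <env_postfix>."""
    return f"{env_prefix}{task_class_name}{env_postfix}"


env_name = make_env_name("BananaInBowlTask", env_postfix="HomeOffice")
print(env_name)             # → BananaInBowlTaskHomeOffice
print(env_name + "EnvCfg")  # → BananaInBowlTaskHomeOfficeEnvCfg (the generated config class)
```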
A single Task Name can have multiple Environment Names (variants):

```
Task Name: BananaInBowlTask
├── Environment: BananaInBowlTaskHomeOffice   (HomeOffice background)
├── Environment: BananaInBowlTaskWarehouse    (Warehouse background)
└── Environment: BananaInBowlTaskBilliardHall (BilliardHall background)
```
Use `get_envs()` to query environments. Only one argument is allowed at a time:

```python
from robolab.core.environments.factory import get_envs

# Query by task name — returns all variants for this task
envs = get_envs(task="BananaInBowlTask")
# Returns: ["BananaInBowlTaskHomeOffice", "BananaInBowlTaskWarehouse", ...]

# Query by exact environment name — returns exactly one environment
envs = get_envs(env="BananaInBowlTaskHomeOffice")
# Returns: ["BananaInBowlTaskHomeOffice"]

# Query by tag — returns all environments with the tag
envs = get_envs(tag="pick_place")
# Returns: ["BananaInBowlTaskHomeOffice", "AppleInBowlTaskHomeOffice", ...]

# Get all environments
envs = get_envs()
# Returns: all environments
```

Convenience functions for single queries:
```python
from robolab.core.environments.factory import get_envs_by_task, get_envs_by_tag

envs = get_envs_by_task("BananaInBowlTask")
envs = get_envs_by_tag("pick_place")
```

Every environment is automatically added to the `"all"` tag. Task attributes (if defined on the Task class) are also automatically added as tags. You can add custom tags at registration time:
```python
auto_discover_and_create_cfgs(
    add_tags=["pick_place", "easy_tasks"],
    ...
)
```

Policy-specific parameters:
- `observations_cfg` — Custom observations configuration (see Cameras — Wiring Cameras to Observations)
- `actions_cfg` — Custom actions configuration (see Robots for built-in action configs)
- `robot_cfg` — Robot articulation, actuators, and attached sensors (see Robots)
- `decimation` — Decimation factor (action repeat)
- `dt` — Physics timestep
- `render_interval` — Render every N physics steps
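The three timing parameters interact: physics steps at `1/dt` Hz, the policy acts once every `decimation` physics steps, and a frame is rendered every `render_interval` steps. A quick sanity check of the values used in the registration example (a standalone sketch, not RoboLab code):

```python
# Values from the registration example
dt = 1 / (60 * 2)       # physics timestep: 1/120 s
decimation = 8          # action repeat: policy acts every 8 physics steps
render_interval = 8     # render every 8 physics steps

physics_hz = 1 / dt                     # 120 Hz physics
control_hz = 1 / (dt * decimation)      # 15 Hz effective policy rate
render_hz = 1 / (dt * render_interval)  # 15 Hz rendering
```

With `decimation == render_interval`, exactly one frame is rendered per policy step, so recorded video stays aligned with the policy's decisions.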
Scene parameters (each can live in your own repository — no need to modify RoboLab):
- `camera_cfg` — Scene camera configuration, single config or list (see Cameras)
- `lighting_cfg` — Lighting configuration, single config or list (see Lighting)
- `background_cfg` — HDR/EXR dome light background (see Backgrounds)
Once your environments are registered:
- Run evaluation — Follow the Evaluating a New Policy guide to implement your inference client and run evaluation.
- Manage a task library — If you are building a collection of tasks, see Task Libraries for generating metadata, README tables, and statistics for your tasks.