Commit 4c111d1

Change tabs to tab-set for better appearance in dark mode. (#561)
## Summary

Switches `.. tabs::` for `.. tab-set::` in the docs to fix readability issues in dark mode.

## Detailed description

- For some reason `tab-set` handles dark mode better.
- Addresses: [5727965](https://nvbugspro.nvidia.com/bug/5727965)

Before:

<img width="1350" height="1630" alt="image" src="https://github.com/user-attachments/assets/1d25a745-2ef2-4105-8405-96e01e8b60c8" />

After:

<img width="830" height="799" alt="image" src="https://github.com/user-attachments/assets/4dc6652d-b2cf-4b50-9fea-490b75b4498b" />
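For reference, the change swaps the `sphinx-tabs` directives for their `sphinx-design` equivalents. A minimal sketch of the pattern (the tab titles here are illustrative, not taken from any one file):

```rst
.. Before: sphinx-tabs directives

.. tabs::

   .. tab:: Single GPU

      Tab content goes here.

.. After: sphinx-design directives

.. tab-set::

   .. tab-item:: Single GPU

      Tab content goes here.
```

Note that `tab-set`/`tab-item` require the `sphinx_design` extension to be listed in `extensions` in `conf.py`; this commit does not touch the build configuration, so that is presumably already enabled.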
Parent: 61d3b14

6 files changed: 33 additions & 33 deletions

docs/pages/concepts/concept_policy_design.rst

Lines changed: 9 additions & 9 deletions
@@ -70,16 +70,16 @@ Usage Examples

 **Baseline Testing**

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU
+   .. tab-item:: Single GPU

       .. code-block:: bash

          # Zero action policy for environment validation
          python isaaclab_arena/evaluation/policy_runner.py --policy_type zero_action --num_steps 1000 kitchen_pick_and_place --object cracker_box

-   .. tab:: Distribute Multi-GPU
+   .. tab-item:: Distribute Multi-GPU

       .. code-block:: bash

@@ -89,16 +89,16 @@ Usage Examples

 **Demonstration Replay**

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU
+   .. tab-item:: Single GPU

       .. code-block:: bash

          # Replay recorded demonstrations
          python isaaclab_arena/evaluation/policy_runner.py --policy_type replay --replay_file_path demos.h5 kitchen_pick_and_place --object cracker_box

-   .. tab:: Distribute Multi-GPU
+   .. tab-item:: Distribute Multi-GPU

       .. code-block:: bash

@@ -108,16 +108,16 @@ Usage Examples

 **Neural Policy Execution**

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU
+   .. tab-item:: Single GPU

       .. code-block:: bash

          # GR00T foundation model deployment
          python isaaclab_arena/evaluation/policy_runner.py --policy_type isaaclab_arena_gr00t.policy.gr00t_closedloop_policy.Gr00tClosedloopPolicy --policy_config_yaml_path config.yaml <your_isaaclab_arena_environment>

-   .. tab:: Distribute Multi-GPU
+   .. tab-item:: Distribute Multi-GPU

       .. code-block:: bash

docs/pages/example_workflows/locomanipulation/step_5_evaluation.rst

Lines changed: 3 additions & 3 deletions
@@ -95,9 +95,9 @@ Step 2: Run Parallel Environments Evaluation

 Parallel evaluation of the policy in multiple parallel environments is also supported by the policy runner.

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU Evaluation
+   .. tab-item:: Single GPU Evaluation

       Test the policy in 5 parallel environments with visualization via the GUI run:

@@ -115,7 +115,7 @@ Parallel evaluation of the policy in multiple parallel environments is also supp

          --object brown_box \
          --embodiment g1_wbc_joint

-   .. tab:: Distribute Multi-GPU Evaluation
+   .. tab-item:: Distribute Multi-GPU Evaluation

       Test the policy in 5 parallel environments on each GPU with 2 GPUs total run:

docs/pages/example_workflows/sequential_static_manipulation/step_4_policy_training.rst

Lines changed: 3 additions & 3 deletions
@@ -117,9 +117,9 @@ We provide three post-training options:

 * Low Hardware Requirements: 1 GPU with 24GB memory


-.. tabs::
+.. tab-set::

-   .. tab:: Best Quality
+   .. tab-item:: Best Quality

       Training takes approximately 4-8 hours on 8x L40s GPUs.

@@ -156,7 +156,7 @@ We provide three post-training options:

          --embodiment_tag=GR1 \
          --color_jitter_params brightness 0.3 contrast 0.4 saturation 0.5 hue 0.08

-   .. tab:: Low Hardware Requirements
+   .. tab-item:: Low Hardware Requirements

       Training takes approximately 2-3 hours on 1x Ada6000 GPU.

docs/pages/example_workflows/sequential_static_manipulation/step_5_evaluation.rst

Lines changed: 9 additions & 9 deletions
@@ -93,15 +93,15 @@ You should see similar metrics. All of them shall be greater than 0.9, and the n

 Note that all these metrics are computed over the entire evaluation process, and are affected by the quality of
 post-trained policy, the quality of the dataset, and number of steps in the evaluation.

-.. tabs::
+.. tab-set::

-   .. tab:: Best Quality
+   .. tab-item:: Best Quality

       .. code-block:: text

          Metrics: Metrics: {'success_rate': 1.0, 'object_moved_rate_subtask_0': 1.0, 'revolute_joint_moved_rate_subtask_1': 1.0, 'subtask_success_rate': [1.0, 1.0], 'num_episodes': 5}

-   .. tab:: Low Hardware Requirements
+   .. tab-item:: Low Hardware Requirements

       Evaluated with checkpoint-30000, instead of checkpoint-20000 referenced in the policy configuration file.

@@ -114,9 +114,9 @@ Step 2: Run Parallel environments Evaluation

 Parallel evaluation of the policy in multiple parallel environments is also supported by the policy runner.

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU Evaluation
+   .. tab-item:: Single GPU Evaluation

       Test the policy in 10 parallel environments with visualization via the GUI run:

@@ -132,7 +132,7 @@ Parallel evaluation of the policy in multiple parallel environments is also supp

          --embodiment gr1_joint \
          --object ranch_dressing_hope_robolab

-   .. tab:: Distribute Multi-GPU Evaluation
+   .. tab-item:: Distribute Multi-GPU Evaluation

       Test the policy in 10 parallel environments on each GPU with 2 GPUs total run:

@@ -184,9 +184,9 @@ Step 3: Multi-object Heterogeneous Evaluation

 This step demonstrates evaluation of the policy in heterogeneous environments with multiple objects.

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU Evaluation
+   .. tab-item:: Single GPU Evaluation

       Test the policy in 10 parallel environments with visualization via the GUI run:

@@ -203,7 +203,7 @@ This step demonstrates evaluation of the policy in heterogeneous environments wi

          --embodiment gr1_joint \
          --object_set ketchup_bottle_hope_robolab ranch_dressing_hope_robolab bbq_sauce_bottle_hope_robolab mayonnaise_bottle_hope_robolab

-   .. tab:: Distribute Multi-GPU Evaluation
+   .. tab-item:: Distribute Multi-GPU Evaluation

       Test the policy in 10 parallel environments on each GPU with 2 GPUs total run:

docs/pages/example_workflows/static_manipulation/step_4_policy_training.rst

Lines changed: 3 additions & 3 deletions
@@ -110,9 +110,9 @@ We provide two post-training options:

 * Low Hardware Requirements: 1 GPU with 24GB memory


-.. tabs::
+.. tab-set::

-   .. tab:: Best Quality
+   .. tab-item:: Best Quality

       Training takes approximately 4-8 hours on 8x L40s GPUs.

@@ -150,7 +150,7 @@ We provide two post-training options:

          --embodiment_tag=GR1 \
          --color_jitter_params brightness 0.3 contrast 0.4 saturation 0.5 hue 0.08

-   .. tab:: Low Hardware Requirements
+   .. tab-item:: Low Hardware Requirements

       Training takes approximately 2-3 hours on 1x Ada6000 GPU.

docs/pages/example_workflows/static_manipulation/step_5_evaluation.rst

Lines changed: 6 additions & 6 deletions
@@ -85,15 +85,15 @@ should be greater than 0.9, and the number of episodes should be in the range of

 Note that all these metrics are computed over the entire evaluation process, and are affected by the quality of
 post-trained policy, the quality of the dataset, and number of steps in the evaluation.

-.. tabs::
+.. tab-set::

-   .. tab:: Best Quality
+   .. tab-item:: Best Quality

       .. code-block:: text

          Metrics: {'success_rate': 0.8823529411764706, 'revolute_joint_moved_rate': 1.0, 'num_episodes': 17}

-   .. tab:: Low Hardware Requirements
+   .. tab-item:: Low Hardware Requirements

       .. code-block:: text

@@ -105,9 +105,9 @@ Step 2: Run Parallel Environments Evaluation

 Parallel evaluation of the policy in multiple parallel environments is also supported by the policy runner.

-.. tabs::
+.. tab-set::

-   .. tab:: Single GPU Evaluation
+   .. tab-item:: Single GPU Evaluation

       Test the policy in 10 parallel environments with visualization via the GUI run:

@@ -123,7 +123,7 @@ Parallel evaluation of the policy in multiple parallel environments is also supp

          gr1_open_microwave \
          --embodiment gr1_joint

-   .. tab:: Distribute Multi-GPU Evaluation
+   .. tab-item:: Distribute Multi-GPU Evaluation

       Test the policy in 10 parallel environments on each GPU with 2 GPUs total run:
