README.md: 21 additions & 18 deletions
@@ -231,8 +231,26 @@ Below are selected community-driven deployment guides and solution write-ups, wh
 
 ## 🔗 Citation
 
-If you find our work helpful, please cite:
+If you find our InternVLA-N1 (Dual System) model helpful, please cite our ICLR paper and previous technical report:
+```bibtex
+@misc{wei2025groundslowfastdualsystem,
+title={Ground Slow, Move Fast: A Dual-System Foundation Model for Generalizable Vision-and-Language Navigation},
+author={Meng Wei and Chenyang Wan and Jiaqi Peng and Xiqian Yu and Yuqiang Yang and Delin Feng and Wenzhe Cai and Chenming Zhu and Tai Wang and Jiangmiao Pang and Xihui Liu},
+year={2025},
+eprint={2512.08186},
+archivePrefix={arXiv},
+primaryClass={cs.RO},
+url={https://arxiv.org/abs/2512.08186},
+}
+@misc{internvla-n1,
+title = {{InternVLA-N1: An} Open Dual-System Navigation Foundation Model with Learned Latent Plans},
+author = {InternNav Team},
+year = {2025},
+booktitle={arXiv},
+}
+```
 
+If you use this InternNav codebase to develop your method, please cite our codebase:
 ```bibtex
 @misc{internnav2025,
 title = {{InternNav: InternRobotics'} open platform for building generalized navigation foundation models},
@@ -242,17 +260,11 @@ If you find our work helpful, please cite:
 }
 ```
 
-If you use the specific pretrained models and benchmarks, please kindly cite the original papers involved in our work. Related BibTex entries of our papers are provided below.
 
-<details><summary>Related Work BibTex</summary>
+<details><summary>If you use the specific pretrained models and benchmarks, please kindly cite the original papers below.</summary>
 
 ```BibTex
-@misc{internvla-n1,
-title = {{InternVLA-N1: An} Open Dual-System Navigation Foundation Model with Learned Latent Plans},
-author = {InternNav Team},
-year = {2025},
-booktitle={arXiv},
-}
+
 @inproceedings{vlnpe,
 title={Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities},
 author={Wang, Liuyi and Xia, Xinyuan and Zhao, Hui and Wang, Hanqing and Wang, Tai and Chen, Yilun and Liu, Chengju and Chen, Qijun and Pang, Jiangmiao},
@@ -271,15 +283,6 @@ If you use the specific pretrained models and benchmarks, please kindly cite the
 year = {2025},
 booktitle={arXiv},
 }
-@misc{wei2025groundslowfastdualsystem,
-title={Ground Slow, Move Fast: A Dual-System Foundation Model for Generalizable Vision-and-Language Navigation},
-author={Meng Wei and Chenyang Wan and Jiaqi Peng and Xiqian Yu and Yuqiang Yang and Delin Feng and Wenzhe Cai and Chenming Zhu and Tai Wang and Jiangmiao Pang and Xihui Liu},