* cuda(histlib): cuda library from icp-flow project.
* docs: update README with icp-flow in the official implementation.
* conf(optimization-based): update all config files.
* todo: update model file; double-check with Yancong and Qingwen to confirm that icp-flow results can be reproduced and tested.
* docs: fix small typo, and start updating the model file.
* feat(icp): core icp files, tested successfully.
* feat(deflowpp): update deflowpp model.
* hotfix(ssf): bug fixes in seflow+ssf, as ssf concatenates two point clouds but forgot to offset the index.
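The class of bug this hotfix describes is common when merging point clouds: index arrays that refer into the second cloud must be shifted by the size of the first. A minimal numpy sketch (the function name and signature are illustrative, not the repo's actual code):

```python
import numpy as np

def concat_point_clouds(pc0, pc1, idx0, idx1):
    """Concatenate two point clouds and their per-point index arrays.

    idx1 refers to rows of pc1, so after concatenation it must be
    shifted by len(pc0) -- forgetting this offset is the kind of bug
    the seflow+ssf hotfix addresses. (Illustrative sketch only.)
    """
    pc = np.concatenate([pc0, pc1], axis=0)
    idx = np.concatenate([idx0, idx1 + len(pc0)])  # shift second cloud's indices
    return pc, idx

# e.g. with len(pc0) == 3, idx1 == [0, 1] becomes [3, 4] after the shift.
```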
* feat(autolabel): update more auto-labels following the HiMo paper.
* add data with lidar_id and lidar_dt
* add flow_instances for an easier DeltaFlow update afterward.
* feat(seflowpp): the full seflowpp process and train scripts.
* tested with the demo successfully; need to double-check the results on AV2 and update the training time & weight link also.
* fix(trainer): add seflowpp into trainer for folder name.
* data(zod): update zod extraction scripts.
* it could be a good reference for users to extract other data into h5 files etc.
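As a rough illustration of what such an extraction step boils down to, here is a minimal h5py sketch that writes one lidar frame per group. The dataset names mirror the `lidar_id`/`lidar_dt` fields mentioned above, but this is an assumption for illustration; the real schema in the repo's extraction scripts may differ.

```python
import h5py
import numpy as np

def save_frame_to_h5(h5_path, frame_id, points, lidar_id, lidar_dt):
    """Append one lidar frame to an .h5 file, one group per frame.

    NOTE: dataset names (lidar, lidar_id, lidar_dt) are illustrative,
    not the exact schema used by the repo's extraction scripts.
    """
    with h5py.File(h5_path, "a") as f:
        g = f.create_group(frame_id)
        g.create_dataset("lidar", data=points.astype(np.float32))
        g.create_dataset("lidar_id", data=lidar_id.astype(np.uint8))
        g.create_dataset("lidar_dt", data=lidar_dt.astype(np.float32))
```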
* !feat(lr): update optimizer to new structure.
- update from DeltaFlow.
- align some format with copilot comments.
* update other submodule repos to align with the lr change.
* docs(slurm): update slurm script so readers can easily check the training setup.
* fix(env): fix some potential env issues for later.
* add some notes
* fix(av2): instance label typo.
* fix(process): update key name in new version for seflow-variant process.
* hotfix(eval): update num_frames in eval.
* hotfix(eval/test): for history frames, we update the key names, and they need the ground mask removed also.
* small fix: set ssl_label to None under supervised training.
* feat(aug): merge data aug strategy from DeltaFlow project.
* docs(README): update readme.
align with paper.
* docs(README): update arxiv link
* feat(deltaflow): update deltaflow model file.
* checked with trained weight.
* conf: update deltaflow conf files
update README to show the progress.
* hotfix(eval): return instead of asserting if there are no GT class points etc.
* to avoid being killed during eval.
* loss(deltaflow): add deltaflow loss.
* docs(README): update readme.
* revert to OpenSceneFlow readme for the codebase.
* docs: update README.
* update av2_mode to data_mode.
* revert the zod process file.
* the runner metric is fine in the last version, as its range_bucket has a different meaning.
* docs: update for diff opt methods.
* hotfix(dataset): fix eval_mask in dataset.
* we only evaluate non-ground points.
* no need to print ssf_metrics since our training range is outside the evaluation range.
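A sketch of the masking logic implied by this fix. The array names are hypothetical; the real dataset code derives these flags from the ground-segmentation labels and the evaluation range.

```python
import numpy as np

def build_eval_mask(is_ground, in_range):
    """Evaluation mask: keep only non-ground points inside the eval range.

    is_ground / in_range are per-point boolean arrays; this is only an
    illustrative sketch of the masking, not the repo's dataset code.
    """
    return (~is_ground) & in_range
```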
* docs(bib): add deltaflow bib back.
* docs(readme): update new bib file.
* feat(log): save the evaluation log and add inlineTee for output file.
* for easy sharing of scores afterward, etc.
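A minimal sketch of what such an inline tee could look like: everything written to a stream is mirrored into a log file. The class name and details are assumptions; the actual implementation in the repo may differ.

```python
import sys

class InlineTee:
    """Mirror writes to a stream into a log file as well (sketch only)."""

    def __init__(self, stream, log_path):
        self.stream = stream
        self.log = open(log_path, "a")

    def write(self, text):
        self.stream.write(text)
        self.log.write(text)

    def flush(self):
        self.stream.flush()
        self.log.flush()

# usage: sys.stdout = InlineTee(sys.stdout, "eval.log")
```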
* update docs about upcoming works also.
* style: clean up previously developed code.
* add readh5 and create_eval_pkl
* docs(bib): update bib.
* docs(data): update README demo data link.
**HiMo: High-Speed Objects Motion Compensation in Point Clouds** (SeFlow++)
*Qingwen Zhang, Ajinkya Khoche, Yi Yang, Li Ling, Sina Sharif Mansouri, Olov Andersson, Patric Jensfelt*
## 1. Data Preparation

Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support [**Argoverse 2**](https://www.argoverse.org/av2.html), [**Waymo**](https://waymo.com/open/), [**nuScenes**](https://www.nuscenes.org/), [**MAN-TruckScene**](https://github.com/TUMFTM/truckscenes-devkit), [**ZOD**](https://github.com/zenseact/zod) and **custom datasets** (more datasets will be added in the future).

After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).

For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo-data-v2.zip)).
Some tips before running the code:

* If you want to use [wandb](https://wandb.ai), replace all `entity="kth-rpl",` with your own entity; otherwise tensorboard will be used locally.
* Set the correct data path by passing the config, e.g. `train_data=/home/kin/data/av2/h5py/demo/train val_data=/home/kin/data/av2/h5py/demo/val`.

To free yourself from training, you can download the pretrained weights from [**HuggingFace - OpenSceneFlow**](https://huggingface.co/kin-zhang/OpenSceneFlow); we provide the detailed `wget` command in each model section. Optimization-based methods are train-free, so you can directly run [3. Evaluation](#3-evaluation) (check more in the evaluation section).
```bash
conda activate opensf
```
### Supervised Training

#### DeltaFlow

Train DeltaFlow with the leaderboard submit config. [Runtime: Around 18 hours on 10x RTX 3080 GPUs.]

```bash
# total batch size is then 10x2 under the above training setup.
```
To train feed-forward SSL methods (e.g. SeFlow/SeFlow++/VoteFlow etc.), we need to:

1) Run the auto-label process for training. Check [dataprocess/README.md#self-supervised-process](dataprocess/README.md#self-supervised-process) for more details. We provide these inside the demo dataset already.
2) Specify the loss function; we set the config here for our best model in the leaderboard.
#### SeFlow
## 3. Evaluation

You can view the Wandb dashboard for the training and evaluation results or upload results to the online leaderboard.

<!-- Three-way EPE and Dynamic Bucket-normalized are evaluated within a 70x70m range (followed Argoverse 2 online leaderboard). No ground points are considered in the evaluation. -->

Since in training we save all hyper-parameters and model checkpoints, the only thing you need to do is specify the checkpoint path. Remember to set the data path correctly also.
[*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) is originally designed by [Qingwen Zhang](https://kin-zhang.github.io/) from DeFlow and SeFlow. It is actively maintained and developed by the community (ref. the works below). If you find it useful, please cite our works:
```bibtex
  doi={10.1109/ICRA57147.2024.10610278}
}
@article{zhang2025himo,
  title={{HiMo}: High-Speed Objects Motion Compensation in Point Cloud},
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
  journal={IEEE Transactions on Robotics},
  year={2025},
  volume={41},
  pages={5896-5911},
  doi={10.1109/TRO.2025.3619042}
}
@inproceedings{zhang2025deltaflow,
  title={{DeltaFlow}: An Efficient Multi-frame Scene Flow Estimation Method},
  author={Zhang, Qingwen and Zhu, Xiaomeng and Zhang, Yushan and Cai, Yixi and Andersson, Olov and Jensfelt, Patric},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://openreview.net/forum?id=T9qNDtvAJX}
}
@misc{zhang2025teflow,
  title={{TeFlow}: Enabling Multi-frame Supervision for Feed-forward Scene Flow Estimation},
  author={Zhang, Qingwen and Jiang, Chenhan and Zhu, Xiaomeng and Miao, Yunqi and Zhang, Yushan and Andersson, Olov and Jensfelt, Patric},
  year={2025},
  url={https://openreview.net/forum?id=h70FLgnIAw}
}
```
And our excellent collaborators' works contributed to this codebase also:

```bibtex
  pages={3462-3469},
  doi={10.1109/LRA.2025.3542327}
}
@inproceedings{khoche2025ssf,
  title={{SSF}: Sparse Long-Range Scene Flow for Autonomous Driving},
  author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2025},
  pages={6394-6400},
  doi={10.1109/ICRA55743.2025.11128770}
}
@inproceedings{lin2025voteflow,
  title={VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow},
  author={Lin, Yancong and Wang, Shiming and Nan, Liangliang and Kooij, Julian and Caesar, Holger},
}
```