
Commit df23d2a

docs(readme): update new bib file.
1 parent 29079fe commit df23d2a

File tree

1 file changed (+19, -14 lines)


README.md

Lines changed: 19 additions & 14 deletions
@@ -103,11 +103,11 @@ If you prefer to build the Docker image by yourself, check [build-docker-image](
 
 ## 1. Data Preparation
 
-Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, **nuScenes**, [**MAN-TruckScene**](https://github.com/TUMFTM/truckscenes-devkit), [**ZOD**](https://github.com/zenseact/zod) and **custom datasets** (more datasets will be added in the future).
+Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support [**Argoverse 2**](https://www.argoverse.org/av2.html), [**Waymo**](https://waymo.com/open/), [**nuScenes**](https://www.nuscenes.org/), [**MAN-TruckScene**](https://github.com/TUMFTM/truckscenes-devkit), [**ZOD**](https://github.com/zenseact/zod), and **custom datasets** (more datasets will be added in the future).
 
 After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).
 
-For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)/[Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)).
+For a quick start, use our **mini processed dataset**, which includes one scene each in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)).
 
 
 ```bash
@@ -125,7 +125,7 @@ Some tips before running the code:
 
 * If you want to use [wandb](https://wandb.ai), replace every `entity="kth-rpl",` with your own entity; otherwise TensorBoard is used locally.
 * Set the correct data path by passing the config, e.g. `train_data=/home/kin/data/av2/h5py/demo/train val_data=/home/kin/data/av2/h5py/demo/val`.
 
-To free yourself from training, you can download the pretrained weights from [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow); the detailed `wget` command is provided in each model section. Optimization-based methods are training-free, so you can jump straight to [3. Evaluation](#3-evaluation) (see the evaluation section for details).
+To free yourself from training, you can download the pretrained weights from [**HuggingFace - OpenSceneFlow**](https://huggingface.co/kin-zhang/OpenSceneFlow); the detailed `wget` command is provided in each model section. Optimization-based methods are training-free, so you can jump straight to [3. Evaluation](#3-evaluation) (see the evaluation section for details).
 
 ```bash
 conda activate opensf
@@ -141,8 +141,8 @@ Train DeltaFlow with the leaderboard submit config. [Runtime: Around 18 hours in
 # Total batch size is 10x2 under the training setup above.
 python train.py model=deltaFlow optimizer.lr=2e-3 epochs=20 batch_size=2 num_frames=5 loss_fn=deflowLoss "voxel_size=[0.15, 0.15, 0.15]" "point_cloud_range=[-38.4, -38.4, -3.2, 38.4, 38.4, 3.2]" +optimizer.scheduler.name=WarmupCosLR +optimizer.scheduler.max_lr=2e-3 +optimizer.scheduler.total_steps=20000
 
-# Pretrained weight can be downloaded through:
-wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/flow4d_best.ckpt
+# Pretrained weights (av2) can be downloaded as below; checkpoints for the other datasets are in the same folder.
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deltaflow/deltaflow-av2.ckpt
 ```
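As an aside on the training command above: the `voxel_size` and `point_cloud_range` overrides jointly fix the voxel grid resolution. A minimal sketch of that arithmetic, where the `ceil` convention and variable names are assumptions for illustration, not code from this repo:

```python
# Sketch: derive the voxel grid shape from the overrides passed to train.py.
# The ceil() rounding is an assumed voxelization convention.
import math

point_cloud_range = [-38.4, -38.4, -3.2, 38.4, 38.4, 3.2]  # x/y/z mins, then maxes
voxel_size = [0.15, 0.15, 0.15]

grid_shape = [
    math.ceil((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i])
    for i in range(3)
]
print(grid_shape)  # [512, 512, 43]: 76.8 m at 0.15 m/voxel in x/y, 6.4 m in z
```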
 
 #### Flow4D
@@ -359,16 +359,21 @@ If you find it useful, please cite our works:
   doi={10.1109/ICRA57147.2024.10610278}
 }
 @article{zhang2025himo,
-  title={HiMo: High-Speed Objects Motion Compensation in Point Clouds},
-  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Sina, Sharif Mansouri and Andersson, Olov and Jensfelt, Patric},
-  year={2025},
-  journal={arXiv preprint arXiv:2503.00803},
+  title={{HiMo}: High-Speed Objects Motion Compensation in Point Cloud},
+  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
+  journal={IEEE Transactions on Robotics},
+  year={2025},
+  volume={41},
+  number={},
+  pages={5896-5911},
+  doi={10.1109/TRO.2025.3619042}
 }
-@article{zhang2025deltaflow,
-  title={{DeltaFlow}: An Efficient Multi-frame Scene Flow Estimation Method},
-  author={Zhang, Qingwen and Zhu, Xiaomeng and Zhang, Yushan and Cai, Yixi and Andersson, Olov and Jensfelt, Patric},
-  year={2025},
-  journal={arXiv preprint arXiv:2508.17054},
+@inproceedings{zhang2025deltaflow,
+  title={{DeltaFlow}: An Efficient Multi-frame Scene Flow Estimation Method},
+  author={Zhang, Qingwen and Zhu, Xiaomeng and Zhang, Yushan and Cai, Yixi and Andersson, Olov and Jensfelt, Patric},
+  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
+  year={2025},
+  url={https://openreview.net/forum?id=T9qNDtvAJX}
 }
 ```
 
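The wandb tip touched in the second hunk (replace `entity="kth-rpl",` with your own entity) is easy to script. A minimal stdlib sketch, where the helper name, the `.py` glob, and the in-place rewrite are assumptions for illustration rather than project tooling:

```python
# Hypothetical helper (not part of OpenSceneFlow): rewrite the hard-coded
# wandb entity across a checkout, as the README tip suggests doing by hand.
from pathlib import Path

OLD_ENTITY = 'entity="kth-rpl",'

def set_wandb_entity(root: str, new_entity: str) -> int:
    """Replace the default wandb entity in every .py file under root.

    Returns the number of files rewritten.
    """
    changed = 0
    for path in Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8")
        if OLD_ENTITY in text:
            path.write_text(
                text.replace(OLD_ENTITY, f'entity="{new_entity}",'),
                encoding="utf-8",
            )
            changed += 1
    return changed
```

Running something like `set_wandb_entity(".", "my-team")` from the repo root before training would apply the tip in one step.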