This repository houses performance benchmarks for [Parcels](https://github.com/O

You can run the linting with `pixi run lint`.
> [!IMPORTANT]
> The default path for the benchmark data is set by [pooch.os_cache](https://www.fatiando.org/pooch/latest/api/generated/pooch.os_cache.html), which is typically a subdirectory of your home directory. You will currently need at least 50 GB of free disk space to store the benchmark data.
>
> To change the location of the benchmark data cache, set the environment variable `PARCELS_DATADIR` to your preferred location.
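The cache resolution described above can be sketched in a few lines of Python. This is a minimal standard-library illustration, not the repository's actual code: the real default is computed per-platform by `pooch.os_cache`, and the `~/.cache/parcels` fallback shown here is only an assumed stand-in for that value.

```python
import os
from pathlib import Path

# PARCELS_DATADIR wins when set; otherwise fall back to a
# pooch.os_cache-style per-user cache directory.
# NOTE: the exact default is platform-dependent and computed by pooch;
# ~/.cache/parcels is an illustrative stand-in, not the guaranteed path.
default_cache = Path.home() / ".cache" / "parcels"
data_dir = Path(os.environ.get("PARCELS_DATADIR", default_cache))
print(data_dir)
```

Pointing `PARCELS_DATADIR` at a large scratch disk is useful on systems where home directories have tight quotas.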
To view the benchmark data:

`pixi run asv publish`

`pixi run asv preview`
## Contributing benchmark runs

We value seeing how Parcels benchmarks perform on a variety of systems. Running the benchmarks adds data to the `results/` subdirectory of this repository. You can then commit the changes to the `results/` subdirectory and open a pull request to contribute your benchmark results.
### Parcels Community Members

Members of the Parcels community can contribute benchmark data using the following steps:

1. [Create a fork of this repository](https://github.com/Parcels-code/parcels-benchmarks/fork)
4. Commit your benchmark data and push the changes back to your fork, e.g.

```
git add results
git commit -m "Add benchmark data"
git push origin main
```

5. [Open a pull request from your fork](https://github.com/Parcels-code/parcels-benchmarks/compare)
## Adding benchmarks

Adding benchmarks for Parcels typically involves adding a dataset and defining the benchmarks you want to run.
### Adding new data

Data is hosted remotely on a SurfDrive managed by the Parcels developers. To start the process of getting your data hosted in the shared SurfDrive, open an issue on this repository.

Once your data is hosted in the shared SurfDrive, you can add your dataset to the benchmark dataset manifest using:
```
pixi run benchmark-setup add-dataset --name "Name for your dataset" --file "Path to ZIP archive in the SurfDrive"
```

During this process, the dataset will be downloaded and a complete entry will be added to the [parcels_benchmarks/benchmarks.json](./parcels_benchmarks/benchmarks.json) manifest file. Once updated, the manifest can be committed to this repository and contributed via a pull request.
### Writing the benchmarks

This repository uses [ASV](https://asv.readthedocs.io/) for running benchmarks. You can add benchmarks by including a Python script in the `benchmarks/` subdirectory. Within each `benchmarks/*.py` file, we ask that you define a class for the set of benchmarks you plan to run for your dataset. The existing benchmarks are a good starting point for writing your own.

To learn more about writing benchmarks compatible with ASV, see the [ASV "Writing Benchmarks" documentation](https://asv.readthedocs.io/en/latest/writing_benchmarks.html).
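As a sketch of what such a class can look like (hypothetical names and workload, not one of this repository's benchmarks): ASV discovers methods by prefix, runs `setup` untimed before each benchmark, times `time_*` methods, and records peak memory for `peakmem_*` methods.

```python
import numpy as np


class ParticleAdvectionSuite:
    """Hypothetical ASV benchmark suite sketch; real suites would load
    a dataset from the benchmark data cache in setup()."""

    def setup(self):
        # Runs before each benchmark and is excluded from the timing.
        rng = np.random.default_rng(seed=0)
        self.positions = np.zeros((10_000, 2))
        self.velocities = rng.normal(size=(10_000, 2))

    def time_euler_step(self):
        # Timed by ASV: one forward-Euler advection step for all particles.
        self.positions += 0.1 * self.velocities

    def peakmem_euler_step(self):
        # ASV reports the peak memory usage while this method runs.
        _ = self.positions + 0.1 * self.velocities
```

Keeping expensive work (data download, field loading) in `setup` rather than in the `time_*` methods ensures the timings measure only the operation of interest.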