Contributions (pull requests) are very welcome! Here's how to get started.
We assume that you have uv installed. Now fork the library on GitHub. Then clone and install the library:

```bash
git clone https://github.com/your-username-here/optimistix.git
cd optimistix
uv run prek install  # Creates a local venv + installs dependencies + installs pre-commit hooks.
```

Now make your changes. Make sure to include additional tests if necessary.
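If you do add tests, they usually set up a small problem with a known solution and check that the solver reaches it. The following is only a minimal sketch, assuming the top-level optx.minimise API and the optx.BFGS solver for illustration; the existing tests in the repository are the authoritative reference for the conventions used.

```python
import jax.numpy as jnp
import optimistix as optx


def test_bfgs_on_quadratic():
    # A convex quadratic with its minimum at (1, -1).
    def fn(y, args):
        return jnp.sum((y - jnp.array([1.0, -1.0])) ** 2)

    solver = optx.BFGS(rtol=1e-8, atol=1e-8)
    sol = optx.minimise(fn, solver, jnp.zeros(2))
    assert jnp.allclose(sol.value, jnp.array([1.0, -1.0]), atol=1e-4)
```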
Next, verify that the tests all pass:

```bash
uv run pytest
```

If your changes could affect solver (or compilation) performance, please run the benchmark tests with
```bash
uv run pytest benchmarks/ --benchmark-only
```

You can run benchmarks before or after your change, and also save more extensive results for analysis. For more on this, skip to the "Benchmarking" section below. Then push your changes back to your fork of the repository:
```bash
git push
```

Finally, open a pull request on GitHub!
If you're making changes to the documentation: make your changes, then build the documentation by running

```bash
uv run mkdocs serve
```

You can then see your local copy of the documentation by navigating to localhost:8000 in a web browser.
Benchmarking

If you're interested in more extensive benchmarking - for instance when contributing a new solver - this section is for you. (Note that benchmarks are not run by default; --benchmark-only is required to override this.)
You can save benchmark results with

```bash
uv run pytest benchmarks/ --benchmark-save=<benchmark_name> --benchmark-only
```
and compare against previous runs with uv run pytest --benchmark-compare, which will automatically pull in the most recently saved run, but also takes run IDs as arguments. See the pytest-benchmark documentation for more command-line options.
The --benchmark-autosave option will name the saved run after the commit ID, instead of a user-defined name.
Make sure that you are running benchmarks with a clean working tree, so you can trace how changes affect performance!
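Putting this together, a typical workflow might look like the following (the benchmark name and run ID here are purely illustrative):

```bash
# Before making changes: save a baseline run.
uv run pytest benchmarks/ --benchmark-only --benchmark-save=baseline

# ...make your changes, then compare against the saved run by its ID.
uv run pytest benchmarks/ --benchmark-only --benchmark-compare=0001
```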
For convenience, we support some custom flags:
- --min-dimension=<int>, --max-dimension=<int>: benchmarks can be run on a subset of problems based on problem size.
- --scipy: benchmarks of our solvers are run against the corresponding SciPy implementation. You might want to limit the problem dimension here, since these can be quite slow.

pytest's -k flag also works in this setting, to enable selective execution of benchmarking functions.
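For example, to benchmark only the BFGS-style solvers on small problems and compare them against their SciPy counterparts, an invocation along these lines should work (the -k pattern and dimension bound are illustrative, and depend on how the benchmark functions are named):

```bash
uv run pytest benchmarks/ --benchmark-only --max-dimension=10 --scipy -k "BFGS"
```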
Analysing benchmark results
You can find a script to analyse benchmark results in benchmarks/profile.py. Run it with
```bash
python benchmarks/profile.py <platform> <python_version> <precision> <id> <kind> *solver_names
```

where platform refers to the platform on which the benchmarks were run (e.g. Darwin), precision is the numerical precision (e.g. 32bit), and id is the benchmark run ID, a four-digit integer.
These are necessary to identify the saved results for the specific run. kind specifies whether runtime or compilation benchmarks are to be compared, and solver names should be given as strings. These are defined in benchmarks/test_benchmarks.py for every benchmarked solver, e.g. optx.BFGS. Putting this together, an example call would be
```bash
python benchmarks/profile.py Darwin 3.13 64bit 0001 runtime optx.BFGS optx.LBFGS
```

If you are contributing a solver
In this case, you're probably reasonably familiar with the alternatives out there - if implementations we could compare against exist, please add them to the listed solvers in benchmarks/test_benchmarks.py, including hyperparameters such as solver tolerances, to get as fair a comparison as is feasible.
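The registration mechanism lives in benchmarks/test_benchmarks.py, so follow the existing entries there. Purely as an illustrative sketch (the solver pairing and tolerance values below are hypothetical, not taken from the benchmark suite), matching hyperparameters might look like:

```python
import scipy.optimize
import optimistix as optx

# Hypothetical pairing of an Optimistix solver with a SciPy reference,
# with tolerances matched as closely as the two APIs allow.
rtol, atol = 1e-8, 1e-8
optx_solver = optx.BFGS(rtol=rtol, atol=atol)

def scipy_reference(fn, y0):
    # scipy's BFGS terminates on the gradient norm; gtol is the closest analogue.
    return scipy.optimize.minimize(fn, y0, method="BFGS", options={"gtol": atol})
```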