Determinism support 1/N (#1281)
Conversation
@mar-yan24 thank you for contributing this feature to MuJoCo Warp!
@mar-yan24 FYI, there is a Warp draft PR introducing determinism in Warp: NVIDIA/warp#1355
We just discussed this - it could be worth pursuing this approach in parallel with Warp's low-level determinism support, since the two approaches may have different performance tradeoffs.
@thowell, thanks for the input! I haven't kept up with Warp as closely recently, so I'll take a look at the PR and see whether its ideas are similar to what I have in my current plan. Regarding @erikfrey's comment, I don't mind working on the rest of the determinism implementation in this PR and comparing performance once it's finished. I'll keep working on this through the week and aim to have the full end-to-end implementation done in about a week. Thank you both for the info/updates!
Thanks for the review @thowell! The changes should be good to go. I am planning the next determinism steps after this PR, such as constraint row allocation and actuator moment allocation. Before I continue building, would you prefer that I keep extending this branch/PR so all the work stays in one place, or split it into separate PRs for review? Either works for me.
@mar-yan24 let's create separate PRs for the next deterministic features. Thanks!
@mar-yan24 can we benchmark the performance of this PR against the built-in determinism from Warp (NVIDIA/warp#1355)? It probably makes sense to confirm that writing custom deterministic kernels is more performant than the general-purpose Warp determinism functionality. Thanks!
@thowell I just tried running the NVIDIA/warp#1355 build on my machine, and several issues currently make it incompatible with mujoco_warp. When I first tried running, it crashed, so I disabled graph capture. The crash seems to happen inside the PR's codegen, where it looks up the destination array for each operation. I don't mind helping to fix this or looking into it deeper, but I suspect there may be other blockers as well. In the meantime, I can draft a minimal repro kernel for the PR? Let me know your thoughts.
Add opt.deterministic flag with post-narrowphase contact sort (#562)
I was previously working on differentiation support for MJWarp but I am taking a break from that because the contacts are giving me a hard time. I can't seem to figure out how to optimize the gradient landscape while keeping good dynamics from rigid contact and coulombic friction. Thus, I have decided spending some time on this would be of more use for now lol.
Summary
This is one of several phased additions. This basic PR just adds an opt-in `opt.deterministic` flag that sorts contacts after narrowphase by `(worldid, geom0, geom1, geomcollisionid)` using `wp.utils.radix_sort_pairs`. This fixes the most upstream source of run-to-run non-determinism on GPU: contact index permutation from `atomic_add` counters in narrowphase and CCD. After sorting, `d.contact.*` is rewritten in canonical order before any downstream kernel reads it.

Downstream state (`qacc`, `qvel`, `qpos`, constraint force, solver reductions) is not yet bitwise reproducible. Follow-ups are needed; see Roadmap below.

Changes

- `types.py`: `Option.deterministic: bool` (default `False`). The docstring notes the phase 1 scope and the ~5-10% overhead.
- `io.py`: wires the default in `put_model` and adds the field to `override_model` so `opt.deterministic=True` works from the CLI.
- `collision_driver.py`: `_sort_contacts()` runs after `_narrowphase()` when the flag is set. Composite 32-bit key: `((world * ngeom + geom0) * ngeom + geom1) * gcid_max + gcid`, falling back to `gcid_max = 1` on int32 overflow. Three gather-permute kernels rewrite `d.contact.*` from temp buffers.
- `determinism_test.py`: 8 parameterized tests -> contact ordering, field bitwise equality across repeat runs, sort key monotonicity, default-false smoke check.

Test results
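One of those tests checks sort key monotonicity. The composite key and its int32 overflow fallback can be sketched in pure Python (illustrative only; the helper names `contact_sort_key`, `safe_gcid_max`, and `canonical_order` are hypothetical stand-ins, not the PR's kernel code):

```python
INT32_MAX = 2**31 - 1

def contact_sort_key(world, geom0, geom1, gcid, ngeom, gcid_max):
    # Most-significant-first packing of (worldid, geom0, geom1, geomcollisionid)
    # into one integer, mirroring the PR's composite 32-bit key.
    return ((world * ngeom + geom0) * ngeom + geom1) * gcid_max + gcid

def safe_gcid_max(nworld, ngeom, gcid_max):
    # The largest possible key is nworld * ngeom^2 * gcid_max - 1; if that
    # cannot fit in int32, drop the gcid tiebreak rather than overflow.
    if nworld * ngeom * ngeom * gcid_max - 1 > INT32_MAX:
        return 1
    return gcid_max

def canonical_order(contacts, ngeom, gcid_max):
    # Stand-in for radix_sort_pairs + gather-permute: return contact indices
    # sorted by the composite key, i.e. the canonical contact order.
    keys = [contact_sort_key(w, g0, g1, gc, ngeom, gcid_max)
            for (w, g0, g1, gc) in contacts]
    return sorted(range(len(contacts)), key=lambda i: keys[i])
```

Sorting by this key orders first by world, then geom pair, then collision id, so the resulting permutation depends only on the contact set, not on which thread's `atomic_add` landed first.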
8/8 pass on RTX 4060 Laptop (sm_89, Ada Lovelace), Warp 1.13.0.dev20260302:
Coverage: contact geom arrays bitwise identical across 3 runs x 10 steps at two `nworld` sizes. All contact fields (`dist`, `pos`, `frame`, `dim`, `worldid`, `geomcollisionid`) bitwise identical. Sort key monotonicity verified. Default `False` confirmed (no cost unless opted in).

Benchmarks
I had Claude help me formulate some benchmarks to see the potential overhead of this implementation: 3 trials x 500 steps, 50-step warmup, `wp.synchronize()` fences around the timing window.

Newton + Dense, RTX 4060 Laptop (sm_89)
CG + Sparse, RTX 4060 Laptop (sm_89)
All configs are under 25% overhead. The worst case is +17.7% (humanoid, nworld=512, Newton+Dense); one trial in that config hit +28.9%, but with a 208 ms stdev vs ~65 ms for adjacent configs, so that was likely thermal throttling on my crappy laptop lol.
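The timing methodology above (warmup, then synchronize-fenced trials) can be sketched generically; `step_fn` and `sync_fn` below are placeholders, with `sync_fn` standing in for `wp.synchronize()`:

```python
import time

def benchmark(step_fn, sync_fn, trials=3, nsteps=500, warmup=50):
    # Warm up so kernel compilation / graph capture is excluded from timing.
    for _ in range(warmup):
        step_fn()
    sync_fn()  # fence: all warmup work must finish before timing starts
    times = []
    for _ in range(trials):
        t0 = time.perf_counter()
        for _ in range(nsteps):
            step_fn()
        sync_fn()  # fence: wait for async GPU work before reading the clock
        times.append(time.perf_counter() - t0)
    return times
```

Without the trailing fence, `perf_counter` would measure only kernel launch time, since GPU work is asynchronous.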
Overhead % is roughly flat across `nworld` within each solver path. The bottleneck is the 17 `wp.empty_like` calls in `_sort_contacts`, not the GPU sort itself. I am planning to pre-allocate scratch buffers and will fix this in a follow-up; let me know your thoughts.

Roadmap
Full reproducibility obviously needs more phases:
My rough plan at the moment is to work on constraint row allocation next, since that is probably what will unblock the downstream effects. After that I will work on actuator moment allocation. Both will be done using a prefix sum.
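As a sketch of why a prefix sum gives determinism here: an exclusive scan over per-item row counts yields fixed write offsets that depend only on the counts, not on thread scheduling (pure-Python illustration; the real version would be a parallel scan, not this serial loop):

```python
def exclusive_scan(counts):
    # Deterministic allocation: item i writes its rows starting at offsets[i].
    # Unlike atomic_add slot-grabbing, the layout is a pure function of the
    # per-item counts, so it is identical on every run.
    offsets, total = [], 0
    for c in counts:
        offsets.append(total)
        total += c
    return offsets, total
```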
The biggest fix later will be the solver reductions, i.e. `cost`, `grad_dot`, `search_dot`. That should make `d.qacc` bitwise stable, and `qpos` and `qvel` then follow. This PR does not make simulation bitwise reproducible end to end; it guarantees only that `d.contact.*` is stable across runs of the same input. Full end-to-end state reproducibility will come as more phases land.
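For the solver reductions, one standard approach is a fixed-shape pairwise (tree) reduction: floating-point addition is not associative, so a sum is bitwise reproducible only if the association order is identical on every run. An illustrative sketch (not the planned implementation):

```python
def tree_reduce(values):
    # Pairwise reduction whose shape is fixed by len(values): the same
    # association order every run, so the floating-point result is bitwise
    # identical regardless of thread timing, unlike an atomic accumulator.
    vals = list(values)
    while len(vals) > 1:
        nxt = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:  # carry the odd element up a level unchanged
            nxt.append(vals[-1])
        vals = nxt
    return vals[0]
```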