feat(c++,pt-expt): add .pt2 (AOTInductor) C/C++ inference with DPA1/DPA2/DPA3 support (#5298)
Add C/C++ inference support for the `.pt2` (torch.export / AOTInductor)
backend, covering all major descriptor types: SE_E2_A, DPA1, DPA2, and
DPA3.
### C/C++ inference backend (`DeepPotPTExpt`)
- New `DeepPotPTExpt` backend that loads `.pt2` models via
`torch::inductor::AOTIModelContainerRunnerCpu`
- Supports PBC, NoPbc, fparam/aparam, multi-frame batching, atomic
energy/virial, LAMMPS neighbor list (with ghost atoms, 2rc padding, type
selection)
- Registered alongside existing PT/TF/JAX/PD backends via the `.pt2`
file extension
### dpmodel fixes for torch.export compatibility
- Replace `[:, :nloc]` slicing with `xp_take_first_n()` in DPA1, DPA2,
DPA3, and repflows/repformers — the original slicing creates `Ne(nall,
nloc)` shape constraints that fail when `nall == nloc` (NoPbc case)
- Replace flat `(nf*nall,)` indexing in `dpa1.py` and `exclude_mask.py`
with `xp_take_along_axis`
- Replace `xp.reshape(mapping, (nframes, -1, 1))` with `xp.expand_dims`
in repflows/repformers — the `-1` resolves to `nall` during tracing
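The slicing fix can be illustrated with a small NumPy sketch. `take_first_n` below is a hypothetical stand-in for the repo's `xp_take_first_n()` helper: gathering the first `nloc` entries with an explicit index array is value-identical to `[:, :nloc]`, but tracing sees a data-independent gather instead of recording a shape comparison between `nall` and `nloc`.

```python
import numpy as np


def take_first_n(x, n, axis=1):
    # Hypothetical stand-in for xp_take_first_n(): gather the first n
    # entries along `axis` via an index array instead of a slice, so
    # export tracing does not emit an Ne(nall, nloc) shape guard.
    return np.take(x, np.arange(n), axis=axis)


nf, nall, nloc = 2, 7, 5
ext = np.arange(nf * nall * 3, dtype=float).reshape(nf, nall, 3)

# Value-identical to the original slicing ...
assert np.array_equal(take_first_n(ext, nloc), ext[:, :nloc])

# ... including the NoPbc case where nall == nloc, which is exactly
# where the traced slice's Ne(nall, nloc) constraint used to fail.
local = np.arange(nf * nloc * 3, dtype=float).reshape(nf, nloc, 3)
assert np.array_equal(take_first_n(local, nloc), local)
print("gather matches slice")
```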
### pt_expt serialization
- `.pt2` export via `torch.export.export` → `aot_compile` → package as
zip
- Python inference via `torch._inductor.aoti_load_package`
### Bug fix in all C++ backends
- Fix ghost-to-local mapping when virtual atoms are present — the old
code `mapping[ii] = lmp_list.mapping[fwd_map[ii]]` used post-filter
indices as original indices; fixed to `mapping[ii] =
fwd_map[lmp_list.mapping[bkw_map[ii]]]`
- Fix use-after-free in `DeepPotPTExpt.cc` where `torch::from_blob`
referenced a local vector after it went out of scope
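The mapping fix can be reproduced with a toy example in plain Python. Here `fwd_map` maps original atom indices to post-filter indices (`-1` for removed virtual atoms), `bkw_map` maps post-filter indices back to original indices, and `lmp_list_mapping` maps every original atom (ghosts included) to its original local owner; the index-space semantics are inferred from the diff, and all names besides those in the diff are illustrative.

```python
# Toy system: 4 local atoms (0-3) + 2 ghosts (4, 5); ghost 4 images
# local atom 1, ghost 5 images local atom 2. Atom 3 is a virtual atom
# that type selection removes.
lmp_list_mapping = [0, 1, 2, 3, 1, 2]  # original index -> original local owner
fwd_map = [0, 1, 2, -1, 3, 4]          # original index -> filtered index (-1 = removed)
bkw_map = [0, 1, 2, 4, 5]              # filtered index -> original index

n_filtered = len(bkw_map)

# Old (buggy): treats the filtered index ii as if it were an original
# index when indexing fwd_map, mixing the two index spaces.
old = [lmp_list_mapping[fwd_map[ii]] for ii in range(n_filtered)]

# Fixed: filtered -> original (bkw_map), look up the original local
# owner, then translate the owner into filtered indexing (fwd_map).
new = [fwd_map[lmp_list_mapping[bkw_map[ii]]] for ii in range(n_filtered)]

print(old)  # ghosts resolve to the wrong owners
print(new)
assert new == [0, 1, 2, 1, 2]  # filtered ghosts 3, 4 map to filtered locals 1, 2
```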
### Test infrastructure
- Model generation scripts (`gen_dpa1.py`, `gen_dpa2.py`, `gen_dpa3.py`,
`gen_fparam_aparam.py`) that build from dpmodel config → serialize →
export to both `.pth` and `.pt2` with identical weights
- Remove pre-committed `.pth` files; regenerate in CI via
`convert-models.sh`
- C++ tests for all descriptor types: SE_E2_A, DPA1, DPA2, DPA3 (both
`.pth` and `.pt2`, PBC + NoPbc, double + float)
- Python unit tests for pt_expt inference (`test_deep_eval.py`)
---------
Co-authored-by: Han Wang <wang_han@iapcm.ac.cn>