perf(pt2): optimize .pt2 C++ inference path #5407
wanghan-iapcm wants to merge 34 commits into deepmodeling:master
Conversation
The reshape-first approach changed the error from a custom ValueError to a generic reshape error, breaking test_self_exception assertions.
- Remove unused _min_nbor_dist variables in main.py
- Remove unused _MULTITASK_JSON/_MULTITASK_SHAREFIT_JSON globals
- Fix duplicate unittest import (use unittest.mock.patch directly)
- Bind test socket to 127.0.0.1 instead of all interfaces
- Remove redundant nframes assignment in _compile_model
Under DDP self.wrapper is DistributedDataParallel which has no .model attribute. Use .module to access the underlying ModelWrapper.
Cover the _compile_model DDP unwrap fix with single-task and multi-task tests that set enable_compile=True under 2-rank gloo DDP.
Replace aot_eager+padding+manual recompile with symbolic make_fx + torch.compile(backend="inductor", dynamic=True). The compiled graph natively handles varying nframes/nloc/nall so the per-batch padding and runtime _recompile pass can be removed. Use a trace-time nframes of 7 (prime) and reshape with -1 in dpmodel (general_fitting, env_mat) to prevent PyTorch's symbolic tracer from unifying the batch dim with numb_fparam / numb_aparam / ntypes / dim_case_embd. Add TestCompiledVaryingNframesWithParams covering collisions with fparam/aparam, and TestCompileCaseEmbdVaryingNframes covering dim_case_embd > 0 with runtime nframes matching the embed dim.
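A minimal sketch of that compile entry point (the wrapper name and inputs are illustrative; only `_TRACE_NFRAMES = 7` and the make_fx-plus-inductor pipeline follow the commit text):

```python
import torch
from torch.fx.experimental.proxy_tensor import make_fx

# Prime trace-time batch size: the symbolic tracer cannot accidentally
# unify it with numb_fparam / numb_aparam / ntypes / dim_case_embd.
_TRACE_NFRAMES = 7

def compile_forward(forward_fn, *example_inputs):
    # Symbolically trace once, then let inductor handle varying
    # nframes/nloc/nall at runtime without per-batch padding or recompiles.
    gm = make_fx(forward_fn, tracing_mode="symbolic")(*example_inputs)
    return torch.compile(gm, backend="inductor", dynamic=True)
```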
…modeling#5393 Add silut/custom_silu support to _torch_activation using native torch ops (sigmoid, tanh, where) so the custom silu stays traceable by make_fx / torch.export. Cross-backend consistency tests cover multiple thresholds across the silu/tanh branches, and a pt_expt unit file exercises default/custom threshold, gradient flow, make_fx, and torch.export. Also port DescrptBlockRepformers accessor tests (get_rcut_smth, get_env_protection). The underlying accessor methods already exist on this branch; these tests guard against regressions.
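The general shape of such an activation, with illustrative coefficients (the exact deepmd constants and threshold handling differ; the point is that only `sigmoid`, `tanh`, and `where` appear, keeping it traceable by make_fx and torch.export):

```python
import torch

def silut(x: torch.Tensor, threshold: float = 3.0) -> torch.Tensor:
    """SiLU below the threshold, a smooth tanh continuation above it."""
    silu = x * torch.sigmoid(x)
    t = torch.tensor(threshold, dtype=x.dtype, device=x.device)
    sig_t = torch.sigmoid(t)
    val_t = t * sig_t                             # silu(t)
    slope_t = sig_t * (1.0 + t * (1.0 - sig_t))   # silu'(t)
    tail = val_t + slope_t * torch.tanh(x - t)    # matches value and slope at t
    return torch.where(x < threshold, silu, tail)
```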
Extend the compiled-vs-uncompiled assertions in TestCompiledConsistency (single-task) and _check_compile_correctness (multi-task) to also cover ``atom_energy`` and the reduced ``virial``. Atomic virial is intentionally not exercised because training never sets ``do_atomic_virial=True``.
…deling#5393 Adds the remaining tests from PR deepmodeling#5393 that were not yet on this branch: ``test_training_loop_compiled_silu`` (silu activation under torch.compile) and ``TestCompiledVaryingNatoms`` (compiled training across systems with different atom counts). Also drops a stray unused ``threshold`` variable in ``test_silut_below_threshold_is_silu`` to match the upstream PR.
Replace the two finite-loss smoke tests with a single test that builds both trainers, syncs weights, and per-step asserts identical predictions, loss, and per-parameter gradients (second-order through the force loss). Also add a silu full-model consistency test and write virial.npy in the small synthetic system so the virial passthrough is exercised on every step. Factor the prediction/grad comparison loops into shared helpers.
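A sketch of the per-parameter gradient comparison the shared helpers perform (names are hypothetical; the second-order gradients arise because the force loss itself differentiates the energy):

```python
import torch

def assert_grads_match(model_a: torch.nn.Module, model_b: torch.nn.Module,
                       atol: float = 1e-10, rtol: float = 1e-10) -> None:
    # Compare per-parameter gradients after both trainers ran one step
    # on identical inputs with synced weights.
    params_a = dict(model_a.named_parameters())
    for name, p_b in model_b.named_parameters():
        torch.testing.assert_close(params_a[name].grad, p_b.grad,
                                   atol=atol, rtol=rtol)
```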
Parametrize TestCompiledVaryingNatoms over se_e2_a, DPA2 and DPA3 with strict atol=rtol=1e-10 on float64 (near machine precision). DPA1 (se_atten) is intentionally omitted: its compiled path is intermittently incorrect (~20% of compiles produce grad diffs up to 0.67 at the first embedding layer), and including it would have required masking the bug with a loose tolerance.
Enable use_three_body=True on the DPA2 varying-natoms test so the compiled three-body neighbor path is also covered. three_body_rcut=3.0 matches the repformer rcut and is large enough to find neighbors in the 6-atom small system (~2.75Å nearest-neighbor distance).
…ckward decomp Revert (-1, nloc*nnei, ...) reshapes back to (nf, -1, ...) in env_mat.py and general_fitting.py. The -1-for-nf pattern breaks zero-atom systems: numpy cannot infer -1 when other dims multiply to zero (0/0), and torch.export shape assertions hit Mod(0,0). Using nf is safe because _TRACE_NFRAMES=7 already prevents symbolic-tracer specialisation during training compile. Add silu_backward decomposition table to make_fx in training.py so inductor can compile second-order gradients through silu without requiring a fused higher-order derivative kernel.
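The zero-atom failure mode is easy to reproduce standalone (illustration only, not code from the PR):

```python
import numpy as np

nf, nloc = 2, 0                    # a zero-atom frame
a = np.zeros((nf, nloc * 3))
a.reshape(nf, nloc, 3)             # OK: explicit nf disambiguates 0 elements
try:
    a.reshape(-1, nloc, 3)         # 0 total elements / (0 * 3): -1 is 0/0
except ValueError as e:
    print(e)  # cannot reshape array of size 0 into shape (newaxis,0,3)
```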
Replace assert with if/raise ValueError for user-facing config validation (data_stat_protect, finetune branch/head checks). Wrap train() in try/finally for destroy_process_group cleanup. Add parents=True, exist_ok=True to stat_file mkdir. Add strict=True to zip() calls. Fix minor test issues.
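A minimal sketch of the cleanup pattern, assuming a `train` entry point:

```python
import torch.distributed as dist

def main(config) -> None:
    try:
        train(config)  # assumed training entry point
    finally:
        # Always tear down the process group, even if training raises.
        if dist.is_available() and dist.is_initialized():
            dist.destroy_process_group()
```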
Match the dpmodel try/except pattern so shape mismatches produce a clear error instead of a raw RuntimeError from torch.view.
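The pattern, sketched with illustrative shapes and message text:

```python
import torch

def reshape_checked(t: torch.Tensor, nf: int, nloc: int) -> torch.Tensor:
    try:
        return t.view(nf, nloc, -1)
    except RuntimeError as e:
        # Mirror dpmodel: surface a clear ValueError instead of torch's
        # raw RuntimeError from torch.view.
        raise ValueError(
            f"cannot reshape tensor of shape {tuple(t.shape)} "
            f"into (nf={nf}, nloc={nloc}, -1)"
        ) from e
```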
DPA1's se_atten attention path produces incorrect force gradients under inductor compile. Add get_numb_attn_layer() API and isinstance guard to reject attn_layer>0 at compile time with a clear error message. DPA1 with attn_layer=0 compiles correctly and is now tested. The guard also covers se_atten_v2 which inherits from DPA1.
DPA1 with se_atten attention layers compiles correctly under inductor+dynamic — the _check_varying_natoms test passes at 1e-10 tolerance for attn_layer=2. The guard added in 1e694a3 was based on an inconclusive diagnostic; replace the rejection test with a compile-correctness test for DPA1 with attention.
The compile guard that used this method was removed in 6d39ddf.
Use ``pt/model/water`` instead of the ``pt/water`` symlink in test_fitting_stat.py to avoid FileNotFoundError on CI. Remove unused ``get_numb_attn_layer()`` from DescrptDPA1.
…faults After Graph.erase_node(), stale C-level prev/next pointers may remain on neighbouring Node objects. Dynamo re-tracing can dereference them and segfault. Rebuild into a fresh graph to eliminate stale pointers.
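A sketch of such a rebuild using `fx.Graph.graph_copy`, assuming a single-output graph:

```python
import torch.fx as fx

def rebuild_graph_module(gm: fx.GraphModule) -> fx.GraphModule:
    new_graph = fx.Graph()
    val_map: dict = {}
    # graph_copy re-creates every live non-output node in the fresh graph,
    # leaving any stale erase_node() pointers behind in the old one.
    out = new_graph.graph_copy(gm.graph, val_map)
    new_graph.output(out)
    return fx.GraphModule(gm, new_graph)
```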
- max_fusion_size 8 → 64 to avoid scheduler timeouts on large descriptors
- add triton.mix_order_reduction=False to work around PyTorch <=2.11 bugs (pytorch/pytorch#174379, #178080, #179494)
- hardcode defaults, let user compile_options override per-key (see the sketch below)
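A sketch of the per-key override scheme (option keys follow the commit text; whether `triton.mix_order_reduction` is recognized depends on the PyTorch version):

```python
import torch

_DEFAULT_INDUCTOR_OPTIONS = {
    "max_fusion_size": 64,                # later reverted to 8, see below
    "triton.mix_order_reduction": False,  # workaround for PyTorch <= 2.11
}

def compile_with_defaults(model, user_options: dict | None = None):
    # User-supplied compile_options win per-key over the hardcoded defaults.
    options = {**_DEFAULT_INDUCTOR_OPTIONS, **(user_options or {})}
    return torch.compile(model, backend="inductor", dynamic=True, options=options)
```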
DDPOptimizer splits the inner compiled graph at bucket boundaries, producing subgraph outputs with symbolic integers that crash AOT Autograd (pytorch/pytorch#134182).
max_fusion_size=64 causes DPA3 force divergence and triton.mix_order_reduction=False causes DPA1-attention divergence (both on CPU, float64, 1e-10 tolerance). Revert to max_fusion_size=8 and remove mix_order_reduction (kept as comment for GPU users). Also revert test_fitting_stat.py path back to pt/water symlink which was confirmed working at c2efbf1.
Remove os.environ["DEVICE"] = "cpu" from all worker functions and replace hardcoded backend="gloo" with auto-detected _DDP_BACKEND (nccl on CUDA, gloo on CPU). env.py now evaluates DEVICE naturally based on hardware availability.
…links Move torch._dynamo.config.optimize_ddp = False from module level into _compile_model() so it only applies when compile is active. Resolve symlinks in test_fitting_stat.py data paths for reliable CI access.
TestFparam.tearDown deleted the committed fparam.npy, breaking other tests (e.g. test_fitting_stat) running later in the same CI shard. Also revert the unnecessary .resolve() workaround in test_fitting_stat.
NCCL rejects two ranks sharing the same GPU device, causing all DDP tests to fail on single-GPU CI runners. Skip the entire module when the backend is NCCL and device_count < 2.
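A module-level guard along these lines (sketch; the real module's wording may differ):

```python
import unittest
import torch

_DDP_BACKEND = "nccl" if torch.cuda.is_available() else "gloo"

# NCCL refuses two ranks on the same GPU, so a single-GPU runner cannot
# host a 2-rank process group: skip the whole module.
if _DDP_BACKEND == "nccl" and torch.cuda.device_count() < 2:
    raise unittest.SkipTest("DDP tests need >= 2 GPUs when backend is NCCL")
```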
Three optimizations for .pt2 (AOTInductor) C++ inference:

1. Replace buildTypeSortedNlist with createNlistTensor: avoids expensive CPU-side distance computation and type-based sorting. The .pt2 model's compiled graph contains format_nlist, which handles distance sorting on-device. The new createNlistTensor pads/truncates to the expected nnei (= sum(sel)) stored in model metadata, since torch.export specializes nnei as a static dimension.
2. Export with do_atomic_virial=False by default: avoids 3 extra torch.autograd.grad backward passes in atomic_virial_corr(). The reduced virial (energy_derv_c_redu) is still correct without the correction (it sums to zero). Add --atomic-virial flag to dp convert-backend for users who need per-atom virial (~2.5x cost). C++ raises an error if atomic virial is requested but the model lacks it.
3. Cache mapping tensor: only rebuild when ago==0 (nlist updated), avoiding redundant CPU-to-device copies every step.

Also adds nnei and do_atomic_virial to .pt2 metadata for C++ to read.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 7739fadaf0
```cpp
int ncopy = std::min(static_cast<int>(data[ii].size()), expected_nnei);
for (int jj = 0; jj < ncopy; ++jj) {
  flat_data[static_cast<size_t>(ii) * expected_nnei + jj] = data[ii][jj];
}
```
Keep all neighbors for model-side sorting
createNlistTensor truncates each atom’s neighbor row to expected_nnei before inference, but forward_lower only performs distance-based sorting/truncation when input nlist.shape[-1] > sum(sel) (otherwise it passes through). In the new path, the model always receives exactly sum(sel) entries, so unsorted LAMMPS neighbor order is never re-sorted and atoms with more than sum(sel) neighbors can drop closer neighbors arbitrarily, changing energies/forces. This affects any system where raw neighbor counts exceed the selected neighbor budget.
Fixed in b7509db — nlist nnei dimension is now dynamic in the .pt2 export. createNlistTensor pads to nnei+1 (never truncates), so the compiled graph's _format_nlist always enters the distance-sort branch on GPU.
📝 Walkthrough

Adds an --atomic-virial flag and threads a do_atomic_virial parameter from CLI and convert_backend through the pt_expt serialization/export pipeline into C++ runtime state; replaces CPU neighbor-list sorting with fixed-width tensor construction and enforces atomic-mode compatibility at compute time.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI
    participant Converter as convert_backend
    participant Serializer as Serializer (pt_expt/utils/serialization)
    participant Cpp as DeepPotPTExpt (C++)
    participant LAMMPS as LAMMPS
    CLI->>Converter: run convert-backend --atomic-virial
    Converter->>Serializer: call out_hook(OUTPUT, data, do_atomic_virial=True)
    Serializer->>Serializer: _trace_and_export(..., do_atomic_virial=True)<br/>record metadata["do_atomic_virial"], compute nnei
    Serializer->>Cpp: load exported .pt2/.pte with metadata (do_atomic_virial, nnei, mapping)
    Cpp->>Cpp: createNlistTensor(raw_nlist, nnei+1)
    Cpp->>Cpp: if atomic requested && !do_atomic_virial -> throw deepmd_exception
    Cpp->>LAMMPS: provide mapping_tensor and nlist tensor for evaluation
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed (1 warning)
Actionable comments posted: 2
🧹 Nitpick comments (3)

source/tests/pt_expt/test_training.py (1)

64-74: Use the new constant in `test_compile_warns_dpa1_with_attention` to eliminate duplication.

The descriptor dictionary defined inline at lines 1211-1221 in `test_compile_warns_dpa1_with_attention` has identical values to this new constant. Consider refactoring that test to use `_DESCRIPTOR_DPA1_WITH_ATTN` instead of the inline definition to follow the DRY principle.

♻️ Refactor the test to use the constant

At lines 1211-1221, replace the inline descriptor definition with the constant:

```diff
-        descriptor = {
-            "type": "dpa1",
-            "sel": 12,
-            "rcut_smth": 0.50,
-            "rcut": 3.00,
-            "neuron": [8, 16],
-            "axis_neuron": 4,
-            "attn_layer": 2,
-            "precision": "float64",
-            "seed": 1,
-        }
+        descriptor = _DESCRIPTOR_DPA1_WITH_ATTN
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@source/tests/pt_expt/test_training.py` around lines 64-74, the test_compile_warns_dpa1_with_attention test contains an inline descriptor identical to the module-level _DESCRIPTOR_DPA1_WITH_ATTN constant; replace the inline dictionary in test_compile_warns_dpa1_with_attention with a reference to _DESCRIPTOR_DPA1_WITH_ATTN so the test uses the shared constant (ensure you import or reference the symbol in the same scope and remove the duplicated literal).

source/tests/infer/gen_spin.py (1)

112-115: Asymmetric atomic-virial setting across .pth and .pt2.

The `.pth` export at Line 112 omits `atomic_virial=True`, while the `.pt2` export at Line 115 passes it. Parity verification downstream (Lines 186-195) compares energies/forces/force_mag but not virials, so this is unlikely to cause test failures. However, if future parity checks extend to (atomic) virials, the two exports would diverge. Consider passing the flag consistently, or add a brief comment noting the intentional asymmetry (i.e., the `.pth` path always computes atomic virial regardless of flag).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@source/tests/infer/gen_spin.py` around lines 112-115, the two calls to convert_backend are inconsistent: convert_backend(INPUT=yaml_path, OUTPUT=pth_path) omits atomic_virial while convert_backend(INPUT=yaml_path, OUTPUT=pt2_path, atomic_virial=True) sets it, which will produce differing virial outputs; update the first call that writes pth_path to include atomic_virial=True for parity (or, if the asymmetry is intentional, add a concise inline comment next to the convert_backend call for pth_path explaining that the .pth export intentionally omits the flag because it always computes atomic virial), referencing the convert_backend function and the INPUT/OUTPUT/atomic_virial parameters and the pth_path/pt2_path variables so reviewers can locate the change.

deepmd/main.py (1)

918-925: LGTM. Flag name, action, default, and help text are consistent with the signature of `convert_backend(atomic_virial: bool = False, ...)`; argparse will populate `args.atomic_virial`, which the downstream `deepmd_main` dispatch can forward as a kwarg.

One minor nit: the help text states "~2.5x inference cost," but the PR description quotes three extra `torch.autograd.grad` backward passes in `atomic_virial_corr()`. If that multiplier was measured on a specific model/device, consider softening to "~2-3x" or "significant" to avoid implying a universal figure; otherwise leave as is.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@deepmd/main.py` around lines 918-925, the help text for the parser argument defined in parser_convert_backend.add_argument("--atomic-virial", ...) claims a specific "~2.5x inference cost" which may be misleading; update the help string for atomic_virial to soften the performance claim (e.g., "approximately 2-3x inference cost" or "significant inference overhead") so it doesn't imply a universal measurement.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deepmd/entrypoints/convert_backend.py`:
- Around line 35-42: The conversion currently silently ignores the
--atomic-virial flag when the backend hook (out_hook) does not accept
do_atomic_virial; change this to fail fast: after obtaining sig =
inspect.signature(out_hook) check if atomic_virial is True and
"do_atomic_virial" not in sig.parameters, then raise a clear error (e.g.,
SystemExit or ValueError) explaining that the target backend does not support
--atomic-virial rather than calling out_hook without the parameter; otherwise
proceed to call out_hook(OUTPUT, data, do_atomic_virial=atomic_virial) or
out_hook(OUTPUT, data) as before.
In `@source/api_cc/src/commonPTExpt.h`:
- Around line 461-472: createNlistTensor silently truncates rows longer than
expected_nnei, which can change neighbor selection versus a sort-then-truncate
strategy; modify createNlistTensor to validate each row size and fail fast
(e.g., throw or assert) if any data[ii].size() > expected_nnei instead of using
std::min truncation. Update the function that builds flat_data
(createNlistTensor) to check row length before copying, and include a clear
error message referencing expected_nnei and the offending row size; this keeps
callers (and upstream padding()/format_nlist logic) from accidentally producing
incorrect neighbor sets.
📒 Files selected for processing (18)

- deepmd/entrypoints/convert_backend.py
- deepmd/main.py
- deepmd/pt_expt/train/training.py
- deepmd/pt_expt/utils/serialization.py
- source/api_cc/include/DeepPotPTExpt.h
- source/api_cc/include/DeepSpinPTExpt.h
- source/api_cc/src/DeepPotPTExpt.cc
- source/api_cc/src/DeepSpinPTExpt.cc
- source/api_cc/src/commonPTExpt.h
- source/tests/infer/gen_dpa1.py
- source/tests/infer/gen_dpa2.py
- source/tests/infer/gen_dpa3.py
- source/tests/infer/gen_fparam_aparam.py
- source/tests/infer/gen_model_devi.py
- source/tests/infer/gen_sea.py
- source/tests/infer/gen_spin.py
- source/tests/infer/gen_spin_model_devi.py
- source/tests/pt_expt/test_training.py
💤 Files with no reviewable changes (1)
- deepmd/pt_expt/train/training.py
| "seed": 1, | ||
| } | ||
|
|
||
| _DESCRIPTOR_DPA1_WITH_ATTN = { |
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master    #5407      +/-   ##
==========================================
- Coverage   80.46%   79.30%   -1.17%
==========================================
  Files         820      820
  Lines       86005    85972      -33
  Branches     4139     4137       -2
==========================================
- Hits        69207    68182    -1025
- Misses      15521    16515     +994
+ Partials     1277     1275       -2
```

☔ View full report in Codecov by Sentry.
Instead of truncating the neighbor list to sum(sel) in C++ (which drops neighbors without distance sorting), make the nnei dimension dynamic in torch.export. The .pt2 model's compiled format_nlist now sorts by distance on-device and truncates to sum(sel), matching the .pth behavior. C++ createNlistTensor pads to at least nnei+1 columns so the format_nlist sort branch is always active.
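Reduced to a toy model, the dynamic-dimension export looks like this (everything here is illustrative; the real export marks the model's nlist input):

```python
import torch
from torch.export import Dim, export

class ToyNlistModel(torch.nn.Module):
    # Stand-in for the real model: counts non-padding (-1) neighbors.
    def forward(self, nlist: torch.Tensor) -> torch.Tensor:
        return (nlist >= 0).sum(dim=-1)

nlist = torch.randint(-1, 6, (1, 6, 13))       # (nf, nloc, n_nnei)
n_nnei = Dim("n_nnei", min=2)                  # last dim stays dynamic
ep = export(ToyNlistModel(), (nlist,), dynamic_shapes={"nlist": {2: n_nnei}})
# Any n_nnei >= 2 is now accepted at run time, so C++ can pad to nnei + 1
# and let the compiled format_nlist sort/truncate on-device.
print(ep.module()(torch.randint(-1, 6, (1, 6, 20))).shape)
```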
Actionable comments posted: 2

🧹 Nitpick comments (2)

source/api_cc/src/DeepSpinPTExpt.cc (2)

322-326: Also cache `firstneigh_tensor` when `ago != 0`.

`nlist_data.jlist` is only mutated inside the `if (ago == 0)` block, so for subsequent MD steps with the same neighbor list this rebuilds the exact same `[1, nloc, nnei+1]` int32 tensor, materializes it on CPU, and copies it to the device every step. Same rationale as the `mapping_tensor` caching just above — worth extending to avoid the per-step H2D copy.

♻️ Proposed caching (mirrors mapping_tensor)

```diff
   if (ago == 0) {
     nlist_data.copy_from_nlist(lmp_list, nall - nghost);
     nlist_data.shuffle_exclude_empty(fwd_map);
     nlist_data.padding();
+    firstneigh_tensor = createNlistTensor(nlist_data.jlist, nnei + 1)
+                            .to(torch::kInt64)
+                            .to(device);
     // Rebuild mapping tensor only when nlist is updated (ago == 0).
     ...
   }
-  // Build raw nlist tensor ...
-  at::Tensor firstneigh_tensor = createNlistTensor(nlist_data.jlist, nnei + 1)
-                                     .to(torch::kInt64)
-                                     .to(device);
```

(Declare `firstneigh_tensor` as a member alongside `mapping_tensor`.)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@source/api_cc/src/DeepSpinPTExpt.cc` around lines 322-326, the code currently rebuilds and H2D-copies firstneigh_tensor each step even when ago != 0; change firstneigh_tensor to be a persistent member (like mapping_tensor), only recreate and assign it inside the block where ago == 0 (using createNlistTensor(nlist_data.jlist, nnei + 1).to(torch::kInt64).to(device)), and on subsequent steps reuse the cached member instead of recreating it; ensure the member is cleared/updated when nlist_data.jlist actually changes and that usages of firstneigh_tensor in functions downstream reference the member variable.

296-321: Optional: collapse the two `mapping_tensor` branches.

The only difference between the `lmp_list.mapping` and identity branches is how `mapping[ii]` is populated; the `from_blob → clone → to(device) → assign to mapping_tensor` tail is identical. Factoring out the tail would reduce duplication and the risk of future drift between the two paths.

♻️ Proposed

```diff
-  if (lmp_list.mapping) {
-    std::vector<std::int64_t> mapping(nall_real);
-    for (int ii = 0; ii < nall_real; ii++) {
-      mapping[ii] = fwd_map[lmp_list.mapping[bkw_map[ii]]];
-    }
-    mapping_tensor =
-        torch::from_blob(mapping.data(), {1, nall_real}, int_option)
-            .clone()
-            .to(device);
-  } else {
-    std::vector<std::int64_t> mapping(nall_real);
-    for (int ii = 0; ii < nall_real; ii++) {
-      mapping[ii] = ii;
-    }
-    mapping_tensor =
-        torch::from_blob(mapping.data(), {1, nall_real}, int_option)
-            .clone()
-            .to(device);
-  }
+  std::vector<std::int64_t> mapping(nall_real);
+  if (lmp_list.mapping) {
+    for (int ii = 0; ii < nall_real; ii++) {
+      mapping[ii] = fwd_map[lmp_list.mapping[bkw_map[ii]]];
+    }
+  } else {
+    for (int ii = 0; ii < nall_real; ii++) {
+      mapping[ii] = ii;
+    }
+  }
+  mapping_tensor =
+      torch::from_blob(mapping.data(), {1, nall_real}, int_option)
+          .clone()
+          .to(device);
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@source/api_cc/src/DeepSpinPTExpt.cc` around lines 296-321, collapse duplication by building the std::vector<std::int64_t> mapping once and then doing the torch::from_blob(...).clone().to(device) tail only once; specifically, in the ago==0 block create mapping sized nall_real, fill it using either the lmp_list.mapping path (mapping[ii] = fwd_map[lmp_list.mapping[bkw_map[ii]]]) or the identity path (mapping[ii] = ii) based on lmp_list.mapping, then call torch::from_blob(mapping.data(), {1, nall_real}, int_option).clone().to(device) to assign mapping_tensor; keep references to nall_real, bkw_map, fwd_map, int_option and device unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@source/api_cc/src/DeepSpinPTExpt.cc`:
- Around line 412-418: The atomic-virial capability check (do_atomic_virial)
must be performed before invoking run_model to avoid wasted inference: in the
compute methods that handle atomic output (the branch gated by the atomic flag
in DeepSpinPTExpt::compute and the standalone path around the run_model call),
move the existing do_atomic_virial check to the top of the atomic-capable branch
immediately after you detect atomic==true (i.e., before building input tensors
and calling run_model), and remove the later redundant check that currently runs
after run_model returns; you may also consider performing this validation in
init() if convenient, but the minimal fix is to hoist the check above run_model
in the compute paths referencing run_model and do_atomic_virial.
- Around line 135-151: The loader currently falls back do_atomic_virial and
computes nnei but does not detect older .pt2 graphs that lack the new
format_nlist padding, which can cause silent incorrect inference; update
DeepSpinPTExpt.cc to read a metadata schema/version or explicitly check for a
"format_nlist" flag in metadata during load (before creating/passing raw padded
neighbor lists) and reject the archive with a clear error if the graph was
exported pre-format_nlist (or if metadata version < required_version),
referencing the existing symbols/do_atomic_virial, nnei, and format_nlist to
locate the logic that computes nnei and constructs neighbor lists, so
incompatible .pt2 files are detected and loading aborted rather than proceeding
silently.
📒 Files selected for processing (7)

- deepmd/pt_expt/utils/serialization.py
- source/api_cc/src/DeepPotPTExpt.cc
- source/api_cc/src/DeepSpinPTExpt.cc
- source/api_cc/src/commonPTExpt.h
- source/tests/pt_expt/export_helpers.py
- source/tests/pt_expt/model/test_ener_model.py
- source/tests/pt_expt/model/test_export_pipeline.py
🚧 Files skipped from review as they are similar to previous changes (1)
- source/api_cc/src/DeepPotPTExpt.cc
```cpp
if (metadata.obj_val.count("do_atomic_virial")) {
  do_atomic_virial = metadata["do_atomic_virial"].as_bool();
} else {
  // Older models without this field were exported with do_atomic_virial=True
  do_atomic_virial = true;
}

// Read expected nnei (= sum(sel)) — the .pt2 graph has this dimension static.
if (metadata.obj_val.count("nnei")) {
  nnei = metadata["nnei"].as_int();
} else {
  // Fallback: compute from sel array
  nnei = 0;
  for (const auto& v : metadata["sel"].as_array()) {
    nnei += v.as_int();
  }
}
```
The default do_atomic_virial=true fallback is correct, but consider adding a version check for old .pt2 files lacking format_nlist.
The do_atomic_virial field is being correctly written to metadata in deepmd/pt_expt/utils/serialization.py (line 541), and the fallback to true for older models is appropriate. However, this PR introduces format_nlist into the compiled .pt2 graph (via trace-time padding at export time). Old .pt2 files exported before this change would not have format_nlist compiled in, yet the C++ runtime in DeepSpinPTExpt.cc (lines 323–326, 602–604) creates raw padded neighbor lists and passes them to the compiled graph assuming format_nlist exists. Loading such legacy models could produce silent incorrect results.
Consider adding a version or schema field to the metadata to distinguish pre- and post-format_nlist .pt2 files, and reject incompatible archives at load time rather than failing silently at inference.
Older .pt2 models are handled by fallback defaults: do_atomic_virial defaults to true (matching the old export behavior) and nnei defaults to sum(sel) when the metadata field is absent. There is no versioning scheme in the .pt2 format to do more than this.
@wanghan-iapcm, understood — the fallback defaults cover the older model behavior as closely as the format allows. Accepting this as the intended backward-compatibility strategy.
✏️ Learnings added
Learnt from: wanghan-iapcm
Repo: deepmodeling/deepmd-kit PR: 5407
File: source/api_cc/src/DeepSpinPTExpt.cc:135-151
Timestamp: 2026-04-20T13:36:43.594Z
Learning: In `deepmd-kit`, the `.pt2` format has no versioning scheme. Backward compatibility for older `.pt2` models in `DeepSpinPTExpt.cc` and `DeepPotPTExpt.cc` is handled purely via metadata fallback defaults: `do_atomic_virial` defaults to `true` (matching old export behavior) and `nnei` defaults to `sum(sel)` when the `nnei` metadata field is absent. No version/schema field should be expected in `.pt2` metadata.
The .pt2 model's compiled format_nlist expects n_nnei > sum(sel) to trigger the sort branch. Pad the nlist with one extra -1 column before calling the AOTInductor runner.
🧹 Nitpick comments (1)
deepmd/pt_expt/infer/deep_eval.py (1)
762-774: Deduplicate the nlist padding block.

The same 13-line padding snippet is repeated verbatim in `_eval_model` (Lines 762-774) and `_eval_model_spin` (Lines 914-926). Consider extracting a small helper (e.g. `_pad_nlist_for_pt2(nlist_t)`) to keep the two call sites in sync and document the "n_nnei > nnei triggers format_nlist sort branch" invariant in one place.

♻️ Proposed refactor

```diff
+    @staticmethod
+    def _pad_nlist_for_pt2(nlist_t: torch.Tensor) -> torch.Tensor:
+        """Pad nlist with one extra -1 column so n_nnei > nnei, ensuring
+        format_nlist's compiled sort branch executes in the .pt2 model."""
+        pad = -torch.ones(
+            (*nlist_t.shape[:2], 1), dtype=nlist_t.dtype, device=nlist_t.device
+        )
+        return torch.cat([nlist_t, pad], dim=-1)
```

Then at both call sites:

```diff
-            nlist_t = torch.cat(
-                [
-                    nlist_t,
-                    -torch.ones(
-                        (*nlist_t.shape[:2], 1),
-                        dtype=nlist_t.dtype,
-                        device=nlist_t.device,
-                    ),
-                ],
-                dim=-1,
-            )
+            nlist_t = self._pad_nlist_for_pt2(nlist_t)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/infer/deep_eval.py` around lines 762 - 774, Duplicate 13-line padding of nlist_t in _eval_model and _eval_model_spin should be extracted to a single helper to avoid drift: create a small function (e.g. _pad_nlist_for_pt2(nlist_t)) that performs the torch.cat(...) padding of an extra -1 column and document the invariant "n_nnei > nnei triggers format_nlist sort branch" in that helper's docstring; then replace the inlined padding blocks in both _eval_model and _eval_model_spin with calls to _pad_nlist_for_pt2(nlist_t) so both call sites share the same implementation and comment.
📒 Files selected for processing (1)

- deepmd/pt_expt/infer/deep_eval.py
…orted --atomic-virial Move the do_atomic_virial check before run_model in both DeepPotPTExpt and DeepSpinPTExpt so we fail fast without wasting GPU computation. Also reject --atomic-virial flag in convert_backend when the output backend doesn't support it.
Summary
- Replace `buildTypeSortedNlist` with `createNlistTensor(data, expected_nnei)` — avoids distance computation and type sorting every step; the model's compiled `format_nlist` handles this on-device
- Export with `do_atomic_virial=False` by default — avoids 3 extra `torch.autograd.grad` backward passes; add `--atomic-virial` flag to `dp convert-backend`
- Cache `mapping_tensor` as a member variable; only rebuild when `ago == 0`
- Add `nnei` and `do_atomic_virial` to .pt2 metadata for C++ to read at init

Benchmark (V100-SXM2-16GB, 192-atom water, LAMMPS MD)
Before this PR
.pt2 was 9x slower than .pth due to CPU-side nlist sorting, baked-in atomic virial backward passes, and excessive clones.

After this PR

(Per-model timing tables: DPA1 L0 (se_atten nlayer=0), DPA1 L2 (se_atten nlayer=2), DPA2 (repinit + repformer).)
More complex models benefit more from AOTInductor: DPA2 19-51% faster, DPA1 L2 37-40% faster, DPA1 L0 3-41% faster.
Test plan
🤖 Generated with Claude Code