Commit 31964ae

doc: fix DPA-2 theory math rendering and minor heading typos

Authored by OpenClaw (model: gpt-5.3-codex)
1 parent 556759f commit 31964ae

1 file changed: doc/model/dpa2.md (8 additions & 9 deletions)

@@ -14,30 +14,29 @@ DPA-2 is an attention-based descriptor architecture proposed for large atomic mo
 
 At a high level, DPA-2 builds local representations with three coupled channels (paper notation):
 
-- **Single-atom channel** $\mathbf{f}_lpha$
-- **Rotationally invariant pair channel** $\mathbf{g}_{lphaeta}$
-- **Rotationally equivariant pair channel** $\mathbf{h}_{lphaeta}$
+- **Single-atom channel** $\mathbf{f}_\alpha$
+- **Rotationally invariant pair channel** $\mathbf{g}_{\alpha\beta}$
+- **Rotationally equivariant pair channel** $\mathbf{h}_{\alpha\beta}$
 
-for neighbors $eta\in\mathcal{N}(lpha)$ within cutoffs.
+for neighbors $\beta\in\mathcal{N}(\alpha)$ within cutoffs.
 
 ### Descriptor pipeline
 
 The descriptor follows two main stages:
 
 1. **repinit (representation initializer)**
    - Initializes and fuses type and geometry information from local environments.
-1. **repformer (representation transformer)**
+2. **repformer (representation transformer)**
    - Stacked message-passing layers that update $\mathbf{f}$ and $\mathbf{g}$ channels through convolution/symmetrization/MLP and attention-style interactions.
 
 The final descriptor is formed from learned single-atom representations and then passed to downstream fitting/model components.
 
-### Message passing intuition
+### Message-passing intuition
 
 DPA-2 updates local features layer-by-layer with residual connections. Conceptually, each layer performs neighborhood aggregation using geometry-conditioned interactions:
 
 ```math
-\mathbf{h}_lpha^{(l+1)} = \mathbf{h}_lpha^{(l)} + \mathrm{MP}^{(l)}\left(\mathbf{h}_lpha^{(l)}, \{\mathbf{h}_eta^{(l)}\}_{eta\in\mathcal{N}(lpha)}, \{\mathbf{g}_{lphaeta}\}_{eta\in\mathcal{N}(lpha)}
-ight)
+\mathbf{h}_\alpha^{(l+1)} = \mathbf{h}_\alpha^{(l)} + \mathrm{MP}^{(l)}\left(\mathbf{h}_\alpha^{(l)}, \{\mathbf{h}_\beta^{(l)}\}_{\beta\in\mathcal{N}(\alpha)}, \{\mathbf{g}_{\alpha\beta}\}_{\beta\in\mathcal{N}(\alpha)}\right)
 ```
 
 where $\mathrm{MP}^{(l)}$ denotes the layer-specific message-passing operator.
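
Aside: for intuition, a minimal NumPy sketch of the residual update in the corrected equation above — one simplified layer that gates neighbor features $\mathbf{h}_\beta$ by the invariant pair channel $\mathbf{g}_{\alpha\beta}$ and adds the aggregated message back to $\mathbf{h}_\alpha$. The shapes, the padding convention, and the `tanh`/linear stand-in for the convolution/symmetrization/MLP step are assumptions of this sketch, not DeePMD-kit's actual repformer.

```python
import numpy as np


def mp_layer_update(h, g, neighbors, W):
    """One residual step: h^(l+1) = h^(l) + MP(h^(l), {h_beta}, {g_ab}).

    Hypothetical shapes chosen for the sketch:
      h         : (n_atoms, d)          per-atom features
      g         : (n_atoms, max_nbr, d) invariant pair features g_{alpha beta}
      neighbors : (n_atoms, max_nbr)    indices beta in N(alpha); -1 pads
      W         : (d, d)                toy weights standing in for the MLP
    """
    messages = np.zeros_like(h)
    for alpha in range(h.shape[0]):
        for k, beta in enumerate(neighbors[alpha]):
            if beta < 0:  # padded slot, no neighbor here
                continue
            # geometry-conditioned interaction: gate the neighbor's
            # features by the invariant pair channel g_{alpha beta}
            messages[alpha] += g[alpha, k] * h[beta]
    # residual connection around a toy nonlinearity + linear map
    return h + np.tanh(messages) @ W


# toy usage: 5 atoms, up to 3 neighbors, 8 feature channels
rng = np.random.default_rng(0)
n_atoms, max_nbr, d = 5, 3, 8
h = rng.standard_normal((n_atoms, d))
g = rng.standard_normal((n_atoms, max_nbr, d))
neighbors = rng.integers(-1, n_atoms, size=(n_atoms, max_nbr))
h_next = mp_layer_update(h, g, neighbors, rng.standard_normal((d, d)) / d**0.5)
print(h_next.shape)  # (5, 8)
```

Stacking several such layers and reading out the per-atom features corresponds to the "final descriptor ... formed from learned single-atom representations" described in the hunk.
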
@@ -65,7 +64,7 @@ If one runs LAMMPS with MPI, the customized OP library for the C++ interface sho
 If one runs LAMMPS with MPI and CUDA devices, it is recommended to compile the customized OP library for the C++ interface with a [CUDA-Aware MPI](https://developer.nvidia.com/mpi-solutions-gpus) library and CUDA,
 otherwise the communication between GPU cards falls back to the slower CPU implementation.
 
-## Limiations of the JAX backend with LAMMPS {{ jax_icon }}
+## Limitations of the JAX backend with LAMMPS {{ jax_icon }}
 
 When using the JAX backend, 2 or more MPI ranks are not supported. One must set `map` to `yes` using the [`atom_modify`](https://docs.lammps.org/atom_modify.html) command.
 