doc(pd): update paddle installation scripts and paddle related content in dpa3 document (#4887)
update Paddle installation scripts and custom Paddle OP error message
<!-- This is an auto-generated comment: release notes by coderabbit.ai -->
## Summary by CodeRabbit
* **Documentation**
  * Updated installation guides to reference PaddlePaddle 3.1.1 for CUDA 12.6, CUDA 11.8, and CPU; added nightly pre-release install examples.
  * Refined training docs wording and CINN note; added Paddle backend guidance and explicit OP-install instructions in DPA3 docs.
* **Chores**
  * Improved error messages when custom Paddle operators are unavailable, adding clearer install instructions and links to documentation.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Signed-off-by: HydrogenSulfate <490868991@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
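The Documentation bullet above says the guides now reference PaddlePaddle 3.1.1 wheels for CUDA 12.6, CUDA 11.8, and CPU. A minimal sketch of selecting the matching pip command is below; the `PADDLE_PLATFORM` variable is illustrative, and the index URLs are assumed from PaddlePaddle's package-index scheme, so verify them against the official install guide before running anything.

```shell
# Pick the PaddlePaddle 3.1.1 install command for the target platform.
# PADDLE_PLATFORM and the index URLs are assumptions, not part of this PR.
PADDLE_PLATFORM="${PADDLE_PLATFORM:-cu126}"   # cu126 | cu118 | cpu

case "$PADDLE_PLATFORM" in
  cu126) CMD="python -m pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/" ;;
  cu118) CMD="python -m pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/" ;;
  cpu)   CMD="python -m pip install paddlepaddle==3.1.1" ;;
  *)     echo "unknown platform: $PADDLE_PLATFORM" >&2; exit 1 ;;
esac

# Print rather than run, so the chosen command can be inspected first.
echo "$CMD"
```

Printing the command instead of executing it keeps the sketch safe to run as-is; paste the printed line into your environment once you have confirmed the wheel name and index URL.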
@@ -40,7 +40,11 @@ Virial RMSEs were averaged exclusively for systems containing virial labels (`Al
 
 Note that we set `float32` in all DPA3 models, while `float64` in other models by default.
 
-## Requirements of installation from source code {{ pytorch_icon }}
+## Requirements of installation from source code {{ pytorch_icon }} {{ paddle_icon }}
+
+::::{tab-set}
+
+:::{tab-item} PyTorch {{ pytorch_icon }}
 
 To run the DPA3 model on LAMMPS via source code installation
 (users can skip this step if using [easy installation](../install/easy-install.md)),
@@ -53,6 +57,25 @@ If one runs LAMMPS with MPI, the customized OP library for the C++ interface sho
 If one runs LAMMPS with MPI and CUDA devices, it is recommended to compile the customized OP library for the C++ interface with a [CUDA-Aware MPI](https://developer.nvidia.com/mpi-solutions-gpus) library and CUDA,
 otherwise the communication between GPU cards falls back to the slower CPU implementation.
 
+:::
+
+:::{tab-item} Paddle {{ paddle_icon }}
+
+The customized OP library for the Python interface can be installed by
+
+```sh
+cd deepmd-kit/source/op/pd
+python setup.py install
+```
+
+If one runs LAMMPS with MPI, the customized OP library for the C++ interface should be compiled against the same MPI library as the runtime MPI.
+If one runs LAMMPS with MPI and CUDA devices, it is recommended to compile the customized OP library for the C++ interface with a [CUDA-Aware MPI](https://developer.nvidia.com/mpi-solutions-gpus) library and CUDA,
+otherwise the communication between GPU cards falls back to the slower CPU implementation.
+
+:::
+
+::::
+
 ## Limitations of the JAX backend with LAMMPS {{ jax_icon }}
 
 When using the JAX backend, 2 or more MPI ranks are not supported. One must set `map` to `yes` using the [`atom_modify`](https://docs.lammps.org/atom_modify.html) command.
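The Paddle tab added by this diff tells the user to build the customized OP library from `deepmd-kit/source/op/pd`. A hedged sketch of that step, wrapped with a directory check so a wrong checkout path fails loudly; `DEEPMD_ROOT` is an assumed variable for the repository location, not part of the docs:

```shell
# Build the customized Paddle OP library for the Python interface, as the
# DPA3 docs describe. DEEPMD_ROOT is an assumption; point it at your clone.
DEEPMD_ROOT="${DEEPMD_ROOT:-$HOME/deepmd-kit}"
OP_DIR="$DEEPMD_ROOT/source/op/pd"

if [ -d "$OP_DIR" ]; then
  # Run the install in a subshell so the caller's working directory is kept.
  (cd "$OP_DIR" && python setup.py install)
else
  echo "expected OP sources at $OP_DIR; set DEEPMD_ROOT to your checkout" >&2
fi
```

The subshell keeps the `cd` from leaking into the calling shell, which matters when this snippet is sourced from a larger build script.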