Commit f40c694

pip install lammps module (deepmodeling#2186)
## Add the LAMMPS module to the wheel

In the next version, one can install DeePMD-kit using

```sh
pip install deepmd-kit[gpu,lmp]
```

Then one can use `lmp` to run the LAMMPS program with `pair deepmd`. This works on Linux and macOS, and requires `libpython` to be installed so that the TF libraries can be loaded. The LAMMPS Python package was prebuilt in [my repo](https://github.com/njzjz/lammps-wheel).

## Build a Python package with the LAMMPS module from source

```sh
export DP_LAMMPS_VERSION=stable_23Jun2022_update2
export DP_VARIANT=cuda
pip install -v .
```

Note that LAMMPS and its plugin must be built under the same compilation conditions (C++ ABI, MPI).

## Other fixes

- rename the `op_abi` module to `deepmd_op`, so that Python and C++ share the same name
- fix the macOS OP library suffix in api_cc: `libdeepmd_op.dylib` -> `libdeepmd_op.so`
- fix the Windows dlopen APIs in api_cc
- fix a compilation error on Windows: `an explicit specialization or instantiation of a function template cannot have any default arguments`
- avoid linking libdeepmd_lmp against the MPI library, which is already linked into the main LAMMPS binary

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>
1 parent 19940da commit f40c694

17 files changed: 242 additions & 71 deletions
deepmd/env.py (1 addition, 1 deletion)

```diff
@@ -376,7 +376,7 @@ def _get_package_constants(
 TF_VERSION = GLOBAL_CONFIG["tf_version"]
 TF_CXX11_ABI_FLAG = int(GLOBAL_CONFIG["tf_cxx11_abi_flag"])

-op_module = get_module("op_abi")
+op_module = get_module("deepmd_op")
 op_grads_module = get_module("op_grads")

 # FLOAT_PREC
```

deepmd/lmp.py (new file, 49 additions)

```diff
@@ -0,0 +1,49 @@
+"""Register entry points for lammps-wheel."""
+import os
+import platform
+from pathlib import Path
+from typing import List, Optional
+
+from find_libpython import find_libpython
+
+from deepmd.env import tf
+
+
+def get_env(paths: List[Optional[str]]) -> str:
+    """Get the environment variable from given paths."""
+    return ":".join(p for p in paths if p is not None)
+
+
+if platform.system() == "Linux":
+    lib_env = "LD_LIBRARY_PATH"
+elif platform.system() == "Darwin":
+    lib_env = "DYLD_FALLBACK_LIBRARY_PATH"
+else:
+    raise RuntimeError("Unsupported platform")
+
+tf_dir = tf.sysconfig.get_lib()
+op_dir = str((Path(__file__).parent / "op").absolute())
+# set LD_LIBRARY_PATH
+os.environ[lib_env] = get_env([
+    os.environ.get(lib_env),
+    tf_dir,
+    os.path.join(tf_dir, "python"),
+    op_dir,
+])
+
+# preload the Python library
+libpython = find_libpython()
+if platform.system() == "Linux":
+    preload_env = "LD_PRELOAD"
+elif platform.system() == "Darwin":
+    preload_env = "DYLD_INSERT_LIBRARIES"
+else:
+    raise RuntimeError("Unsupported platform")
+os.environ[preload_env] = get_env([
+    os.environ.get(preload_env),
+    libpython,
+])
+
+def get_op_dir() -> str:
+    """Get the directory of the deepmd-kit OP library."""
+    return op_dir
```
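The path-joining helper above is small enough to verify in isolation. A standalone sketch with the same logic as `get_env` in `deepmd/lmp.py` (the `/opt/...` directories are hypothetical, purely for illustration):

```python
import os
from typing import List, Optional


def get_env(paths: List[Optional[str]]) -> str:
    """Join paths into a colon-separated env value, skipping unset (None) entries."""
    return ":".join(p for p in paths if p is not None)


# Typical use: merge a possibly-unset variable with new directories,
# without producing a stray leading colon when the variable is unset.
merged = get_env([os.environ.get("DEEPMD_DEMO_UNSET_VAR"), "/opt/tf/lib", "/opt/deepmd/op"])
print(merged)
```

Because `None` entries are dropped before joining, the result never starts or ends with a separator, which matters for `LD_PRELOAD` on some loaders.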

doc/install/easy-install.md (8 additions, 1 deletion)

````diff
@@ -84,7 +84,7 @@ docker pull deepmodeling/dpmdkit-rocm:dp2.0.3-rocm4.5.2-tf2.6-lmp29Sep2021

 ## Install Python interface with pip

-If you only want to install the Python interface and have no existing TensorFlow installed, you can use `pip` to install the pre-built package of the Python interface with CUDA 11 supported:
+If you have no existing TensorFlow installed, you can use `pip` to install the pre-built package of the Python interface with CUDA 11 supported:

 ```bash
 pip install deepmd-kit[gpu]
@@ -95,6 +95,13 @@ Or install the CPU version without CUDA supported:
 pip install deepmd-kit[cpu]
 ```

+[LAMMPS module](../third-party/lammps-command.md) is only provided on Linux and macOS. To enable it, add `lmp` to extras:
+```bash
+pip install deepmd-kit[gpu,lmp]
+```
+MPICH is required for parallel running.
+
 It is suggested to install the package into an isolated environment.
 The supported platform includes Linux x86-64 and aarch64 with GNU C Library 2.28 or above, macOS x86-64, and Windows x86-64.
 A specific version of TensorFlow which is compatible with DeePMD-kit will be also installed.
````

pyproject.toml (18 additions, 8 deletions)

```diff
@@ -49,25 +49,35 @@ repository = "https://github.com/deepmodeling/deepmd-kit"
 write_to = "deepmd/_version.py"

 [tool.cibuildwheel]
-test-command = "dp -h"
-test-requires = "tensorflow"
+test-command = [
+    "dp -h",
+]
+test-extras = ["cpu"]
 build = ["cp310-*"]
 skip = ["*-win32", "*-manylinux_i686", "*-musllinux*"]
 # TODO: bump to "latest" tag when CUDA supports GCC 12
 manylinux-x86_64-image = "quay.io/pypa/manylinux_2_28_x86_64:2022-11-19-1b19e81"
 manylinux-aarch64-image = "quay.io/pypa/manylinux_2_28_aarch64:2022-11-19-1b19e81"

 [tool.cibuildwheel.macos]
+environment = { DP_LAMMPS_VERSION="stable_23Jun2022_update2" }
+before-all = ["brew install mpich"]
 repair-wheel-command = "delocate-wheel --require-archs {delocate_archs} -w {dest_dir} -v {wheel} --ignore-missing-dependencies"
+test-extras = ["cpu", "test", "lmp"]
+test-command = [
+    "dp -h",
+    "pytest {project}/source/tests/test_lammps.py"
+]

 [tool.cibuildwheel.linux]
-repair-wheel-command = "auditwheel repair --exclude libtensorflow_framework.so.2 --exclude libtensorflow_framework.so.1 --exclude libtensorflow_framework.so -w {dest_dir} {wheel}"
+repair-wheel-command = "auditwheel repair --exclude libtensorflow_framework.so.2 --exclude libtensorflow_framework.so.1 --exclude libtensorflow_framework.so --exclude _pywrap_tensorflow_internal.so -w {dest_dir} {wheel}"
 environment-pass = ["CIBW_BUILD", "DP_VARIANT"]
-before-all = """
-if [ "$(uname -m)" = "x86_64" ]; then
-    yum config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo && yum install -y cuda-11-8
-fi
-"""
+environment = { DP_VARIANT="cuda", DP_LAMMPS_VERSION="stable_23Jun2022_update2", MPI_HOME="/usr/lib64/mpich", PATH="/usr/lib64/mpich/bin:$PATH" }
+before-all = [
+    """{ if [ "$(uname -m)" = "x86_64" ] ; then yum config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo && yum install -y cuda-11-8; fi }""",
+    "yum install -y mpich-devel",
+]

 # selectively turn of lintner warnings, always include reasoning why any warning should
 # be silenced
```

setup.py (16 additions, 2 deletions)

```diff
@@ -34,6 +34,13 @@
     cmake_args.append("-DBUILD_TESTING:BOOL=TRUE")
 if os.environ.get("DP_ENABLE_NATIVE_OPTIMIZATION", "0") == "1":
     cmake_args.append("-DENABLE_NATIVE_OPTIMIZATION:BOOL=TRUE")
+dp_lammps_version = os.environ.get("DP_LAMMPS_VERSION", "")
+if dp_lammps_version != "":
+    cmake_args.append("-DBUILD_CPP_IF:BOOL=TRUE")
+    cmake_args.append("-DUSE_TF_PYTHON_LIBS:BOOL=TRUE")
+    cmake_args.append(f"-DLAMMPS_VERSION={dp_lammps_version}")
+else:
+    cmake_args.append("-DBUILD_CPP_IF:BOOL=FALSE")

 tf_install_dir, _ = find_tensorflow()
 tf_version = get_tf_version(tf_install_dir)
@@ -73,7 +80,6 @@ def get_tag(self):
     cmake_args=[
         f"-DTENSORFLOW_ROOT:PATH={tf_install_dir}",
         "-DBUILD_PY_IF:BOOL=TRUE",
-        "-DBUILD_CPP_IF:BOOL=FALSE",
         *cmake_args,
     ],
     cmake_source_dir="source",
@@ -95,9 +101,17 @@ def get_tag(self):
         "sphinx-argparse",
         "pygments-lammps",
     ],
+    "lmp": [
+        "lammps-manylinux-2-28~=2022.6.23.2.2; platform_system=='Linux'",
+        "lammps~=2022.6.23.2.2; platform_system!='Linux'",
+        "find_libpython",
+    ],
     **get_tf_requirement(tf_version),
 },
-entry_points={"console_scripts": ["dp = deepmd.entrypoints.main:main"]},
+entry_points={
+    "console_scripts": ["dp = deepmd.entrypoints.main:main"],
+    "lammps.plugins": ["deepmd = deepmd.lmp:get_op_dir"],
+},
 cmdclass={
     "bdist_wheel": bdist_wheel_abi3,
 },
```
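The new `lammps.plugins` entry point is how the LAMMPS wheel can locate the deepmd plugin directory at runtime. A hedged sketch of that discovery using the standard `importlib.metadata` API (the group name comes from this diff; whether the LAMMPS wheel performs exactly this lookup is an assumption):

```python
from importlib.metadata import entry_points

# Collect every callable registered under the "lammps.plugins" group.
# For deepmd-kit this would resolve to deepmd.lmp:get_op_dir, a
# zero-argument callable returning the directory holding the plugin library.
eps = entry_points()
if hasattr(eps, "select"):      # Python 3.10+: EntryPoints.select()
    plugins = list(eps.select(group="lammps.plugins"))
else:                           # Python 3.8/3.9: entry_points() returns a dict
    plugins = list(eps.get("lammps.plugins", []))

for ep in plugins:
    plugin_dir = ep.load()()    # load the callable, then call it
    print(ep.name, plugin_dir)
```

If deepmd-kit is not installed with the `lmp` extra, the group is simply empty and the loop is a no-op, so consumers can probe for plugins unconditionally.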

source/CMakeLists.txt (10 additions, 4 deletions)

```diff
@@ -112,7 +112,11 @@ set(DEEPMD_SOURCE_DIR ${PROJECT_SOURCE_DIR}/..)

 # setup tensorflow libraries by python
 if (USE_TF_PYTHON_LIBS)
-  find_package (Python COMPONENTS Interpreter Development REQUIRED)
+  if(NOT $ENV{CIBUILDWHEEL} STREQUAL "1")
+    find_package (Python COMPONENTS Interpreter Development REQUIRED)
+  else()
+    set(Python_LIBRARIES ${Python_LIBRARY})
+  endif()
 endif(USE_TF_PYTHON_LIBS)

 # find tensorflow, I need tf abi info
@@ -149,8 +153,8 @@ endif()

 # define names of libs
 set (LIB_DEEPMD "deepmd")
+set (LIB_DEEPMD_OP "deepmd_op")
 if (BUILD_CPP_IF)
-  set (LIB_DEEPMD_OP "deepmd_op")
   set (LIB_DEEPMD_CC "deepmd_cc")
   set (LIB_DEEPMD_C "deepmd_c")
   if (USE_CUDA_TOOLKIT)
@@ -181,8 +185,10 @@ if (BUILD_CPP_IF)
   add_subdirectory (lmp/)
   if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 4.8)
     # add_subdirectory (md/)
+    if(NOT BUILD_PY_IF)
     add_subdirectory (ipi/)
     add_subdirectory (gmx/)
+    endif()
   endif ()
 endif (BUILD_CPP_IF)

@@ -205,7 +211,7 @@ add_custom_target(lammps
   COMMAND ${CMAKE_COMMAND} -P ${CMAKE_CURRENT_BINARY_DIR}/cmake_lammps.cmake)

 # add configure file
-if(BUILD_CPP_IF)
+if(BUILD_CPP_IF AND NOT BUILD_PY_IF)
   include(CMakePackageConfigHelpers)
   set(targets_export_name ${CMAKE_PROJECT_NAME}Targets CACHE INTERNAL "")
   set(generated_dir "${CMAKE_CURRENT_BINARY_DIR}/generated" CACHE INTERNAL "")
@@ -220,4 +226,4 @@ if(BUILD_CPP_IF)
     "${config_file}" INSTALL_DESTINATION ${cmake_files_install_dir})
 install(FILES ${version_file} ${config_file}
   DESTINATION ${cmake_files_install_dir})
-endif(BUILD_CPP_IF)
+endif(BUILD_CPP_IF AND NOT BUILD_PY_IF)
```

source/api_c/CMakeLists.txt (7 additions)

```diff
@@ -23,6 +23,12 @@ if (CMAKE_TESTING_ENABLED)
   target_link_libraries(${libname} PRIVATE coverage_config)
 endif()

+if(BUILD_PY_IF)
+install(
+  TARGETS ${libname}
+  DESTINATION deepmd/op/
+)
+else(BUILD_PY_IF)
 install(
   TARGETS ${libname}
   EXPORT ${CMAKE_PROJECT_NAME}Targets
@@ -33,6 +39,7 @@ install(
   FILES ${INC_SRC}
   DESTINATION include/deepmd
 )
+endif(BUILD_PY_IF)

 if (PACKAGE_C)
   MESSAGE(STATUS "Packaging C API library")
```

source/api_cc/CMakeLists.txt (7 additions)

```diff
@@ -42,6 +42,12 @@ if (CMAKE_TESTING_ENABLED)
 endif()
 target_compile_features(${libname} PUBLIC cxx_std_11)

+if(BUILD_PY_IF)
+install(
+  TARGETS ${libname}
+  DESTINATION deepmd/op/
+)
+else(BUILD_PY_IF)
 install(
   TARGETS ${libname}
   EXPORT ${CMAKE_PROJECT_NAME}Targets
@@ -64,3 +70,4 @@ ${CMAKE_INSTALL_PREFIX}/lib/${CMAKE_SHARED_LIBRARY_PREFIX}${libname}${LOW_PREC_V
 if (CMAKE_TESTING_ENABLED)
   add_subdirectory(tests)
 endif()
+endif(BUILD_PY_IF)
```

source/api_cc/src/DeepPot.cc (8 additions, 8 deletions)

The default arguments are dropped from the explicit template instantiations of `run_model`, since an explicit specialization or instantiation of a function template cannot have default arguments (the Windows compilation error noted in the commit message):

```diff
@@ -87,7 +87,7 @@ run_model <double, double> (ENERGYTYPE & dener,
     Session * session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const AtomMap& atommap,
-    const int nghost = 0);
+    const int nghost);

 template
 void
@@ -97,7 +97,7 @@ run_model <double, float> (ENERGYTYPE & dener,
     Session * session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const AtomMap& atommap,
-    const int nghost = 0);
+    const int nghost);

 template
 void
@@ -107,7 +107,7 @@ run_model <float, double> (ENERGYTYPE & dener,
     Session * session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const AtomMap& atommap,
-    const int nghost = 0);
+    const int nghost);

 template
 void
@@ -117,7 +117,7 @@ run_model <float, float> (ENERGYTYPE & dener,
     Session * session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const AtomMap& atommap,
-    const int nghost = 0);
+    const int nghost);

 template <typename MODELTYPE, typename VALUETYPE>
 static void run_model (ENERGYTYPE & dener,
@@ -210,7 +210,7 @@ void run_model <double, double> (ENERGYTYPE & dener,
     Session* session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const deepmd::AtomMap & atommap,
-    const int& nghost = 0);
+    const int& nghost);

 template
 void run_model <double, float> (ENERGYTYPE & dener,
@@ -221,7 +221,7 @@ void run_model <double, float> (ENERGYTYPE & dener,
     Session* session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const deepmd::AtomMap & atommap,
-    const int& nghost = 0);
+    const int& nghost);

 template
 void run_model <float, double> (ENERGYTYPE & dener,
@@ -232,7 +232,7 @@ void run_model <float, double> (ENERGYTYPE & dener,
     Session* session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const deepmd::AtomMap & atommap,
-    const int& nghost = 0);
+    const int& nghost);

 template
 void run_model <float, float> (ENERGYTYPE & dener,
@@ -243,7 +243,7 @@ void run_model <float, float> (ENERGYTYPE & dener,
     Session* session,
     const std::vector<std::pair<std::string, Tensor>> & input_tensors,
     const deepmd::AtomMap & atommap,
-    const int& nghost = 0);
+    const int& nghost);

 DeepPot::
 DeepPot ()
```

source/api_cc/src/common.cc (17 additions, 2 deletions)

```diff
@@ -1,8 +1,22 @@
 #include "common.h"
 #include "AtomMap.h"
 #include "device.h"
-#include <dlfcn.h>
 #include <fcntl.h>
+#if defined(_WIN32)
+#if defined(_WIN32_WINNT)
+#undef _WIN32_WINNT
+#endif
+
+// target Windows version is Windows 7 and later
+#define _WIN32_WINNT _WIN32_WINNT_WIN7
+#define PSAPI_VERSION 2
+#include <windows.h>
+#include <io.h>
+#define O_RDONLY _O_RDONLY
+#else
+// not Windows
+#include <dlfcn.h>
+#endif
 #include "google/protobuf/text_format.h"
 #include "google/protobuf/io/zero_copy_stream_impl.h"

@@ -299,10 +313,11 @@ deepmd::
 load_op_library()
 {
   tensorflow::Env* env = tensorflow::Env::Default();
-  std::string dso_path = env->FormatLibraryFileName("deepmd_op", "");
 #if defined(_WIN32)
+  std::string dso_path = "deepmd_op.dll";
   void* dso_handle = LoadLibrary(dso_path.c_str());
 #else
+  std::string dso_path = "libdeepmd_op.so";
   void* dso_handle = dlopen(dso_path.c_str(), RTLD_NOW | RTLD_LOCAL);
 #endif
   if (!dso_handle) {
```
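The dlopen fix above replaces TensorFlow's `FormatLibraryFileName` (which would yield `libdeepmd_op.dylib` on macOS) with hard-coded per-platform names, matching the renamed `.so` suffix. The selection logic, sketched in Python for clarity (the file names come from the diff; the helper functions themselves are illustrative, not part of the codebase):

```python
import ctypes
import platform


def op_library_name(system: str) -> str:
    """Per-platform name of the OP library, mirroring the #if in common.cc:
    Windows loads deepmd_op.dll; Linux and macOS both load libdeepmd_op.so
    (the usual .dylib suffix is deliberately not used on macOS)."""
    return "deepmd_op.dll" if system == "Windows" else "libdeepmd_op.so"


def load_op_library():
    """Illustrative loader: ctypes.CDLL wraps dlopen/LoadLibrary internally,
    analogous to the #if defined(_WIN32) branches in the C++ code."""
    name = op_library_name(platform.system())
    try:
        return ctypes.CDLL(name)
    except OSError as err:
        raise RuntimeError(f"cannot load {name}: {err}") from err
```

Hard-coding the names keeps the C++ loader and the CMake install rules (which emit `libdeepmd_op.so` on both Linux and macOS) in agreement.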
