- Download and install
Please follow our GitHub webpage to download the latest released version and the development version.
There are various easy methods to install DeePMD-kit. Choose the one you prefer. If you want to build it yourself, jump to the next two sections.
After the easy installation, DeePMD-kit (dp) and LAMMPS (lmp) will be available to execute. You can try dp -h and lmp -h to see the help. mpirun is also available in case you want to run LAMMPS in parallel.
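For example, assuming a LAMMPS input script named in.lammps (a placeholder name), a parallel run on 4 MPI processes could look like
mpirun -np 4 lmp -in in.lammps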
Offline packages for both the CPU and GPU versions are available on the Releases page.
DeePMD-kit is available with conda. Install Anaconda or Miniconda first.
To install the CPU version:
conda install deepmd-kit=*=*cpu lammps-dp=*=*cpu -c deepmodeling
To install the GPU version containing CUDA 10.1:
conda install deepmd-kit=*=*gpu lammps-dp=*=*gpu -c deepmodeling
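If you prefer to keep DeePMD-kit isolated from other packages, you can instead install into a dedicated conda environment; the environment name deepmd below is only an example:
conda create -n deepmd deepmd-kit=*=*cpu lammps-dp=*=*cpu -c deepmodeling
conda activate deepmd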
A Docker image for installing DeePMD-kit is available here.
To pull the CPU version:
docker pull ghcr.io/deepmodeling/deepmd-kit:1.2.2_cpu
To pull the GPU version:
docker pull ghcr.io/deepmodeling/deepmd-kit:1.2.2_cuda10.1_gpu
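To start an interactive shell in the pulled CPU image and check that dp is available (assuming the image puts the executables on the PATH):
docker run -it ghcr.io/deepmodeling/deepmd-kit:1.2.2_cpu /bin/bash
dp -h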
First, check the Python version on your machine
python --version
We follow the virtual environment approach to install TensorFlow's Python interface. Full instructions can be found on TensorFlow's official website. Here we assume that the Python interface will be installed in the virtual environment directory $tensorflow_venv
virtualenv -p python3 $tensorflow_venv
source $tensorflow_venv/bin/activate
pip install --upgrade pip
pip install --upgrade tensorflow
It is highly recommended to keep the TensorFlow versions of the Python and C++ interfaces consistent.
Every time a new shell is started and one wants to use DeePMD-kit, the virtual environment should be activated by
source $tensorflow_venv/bin/activate
To leave the virtual environment, run
deactivate
If there are multiple Python interpreters with names like python3.x, a specific one can be selected by, for example
virtualenv -p python3.7 $tensorflow_venv
If one does not need the GPU support of DeePMD-kit and is concerned about package size, the CPU-only version of TensorFlow can be installed by
pip install --upgrade tensorflow-cpu
To verify the installation, run
python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
One should remember to activate the virtual environment every time he/she uses DeePMD-kit.
To install the Python interface of DeePMD-kit with pip, execute
pip install deepmd-kit
To test the installation, one may execute
dp -h
It will print the help information like
usage: dp [-h] {train,freeze,test} ...
DeePMD-kit: A deep learning package for many-body potential energy
representation and molecular dynamics
optional arguments:
-h, --help show this help message and exit
Valid subcommands:
{train,freeze,test}
train train a model
freeze freeze the model
test test the model
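One may additionally check that the Python module can be imported (assuming the module name is deepmd):
python -c "import deepmd"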
If one does not need to use DeePMD-kit with LAMMPS or i-PI, then the Python interface installed in the previous section does everything and this section can safely be skipped.
Check the compiler version on your machine
gcc --version
The C++ interface of DeePMD-kit has been tested with gcc >= 4.8. Note that the i-PI support is only compiled with gcc >= 4.9.
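If the default compiler is too old, a newer one can be selected for the build below through the standard CC and CXX environment variables; the compiler names gcc-7/g++-7 are only examples and depend on your system:
export CC=gcc-7
export CXX=g++-7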
First, the C++ interface of TensorFlow should be installed. Note that the TensorFlow version should be consistent with that of the Python interface. We assume that you have followed our instructions and installed the TensorFlow Python interface 1.14.0; then you may follow the instructions for CPU to install the corresponding C++ interface (CPU only). If one wants GPU support, follow the instructions for GPU to install the C++ interface.
Clone the DeePMD-kit source code
cd /some/workspace
git clone --recursive https://github.com/deepmodeling/deepmd-kit.git deepmd-kit
For convenience, you may want to record the location of the source code in a variable, say deepmd_source_dir, by
cd deepmd-kit
deepmd_source_dir=`pwd`
Now go to the source code directory of DeePMD-kit and make a build directory.
cd $deepmd_source_dir/source
mkdir build
cd build
Assuming you want to install DeePMD-kit into the path $deepmd_root, execute cmake
cmake -DTENSORFLOW_ROOT=$tensorflow_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
where the variable tensorflow_root stores the location where TensorFlow's C++ interface is installed. DeePMD-kit will automatically detect whether a CUDA toolkit is available on your machine and build the GPU support accordingly. If you want to force cmake to find the CUDA toolkit, you can specify the key USE_CUDA_TOOLKIT,
cmake -DUSE_CUDA_TOOLKIT=true -DTENSORFLOW_ROOT=$tensorflow_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
and you may be further asked to provide CUDA_TOOLKIT_ROOT_DIR.
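For example, if the CUDA toolkit is installed under /usr/local/cuda (a common but not universal location), it can be passed explicitly:
cmake -DUSE_CUDA_TOOLKIT=true -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda -DTENSORFLOW_ROOT=$tensorflow_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..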
If cmake has executed successfully, then
make
make install
If everything works fine, you will have the following executable and libraries installed in $deepmd_root/bin and $deepmd_root/lib
$ ls $deepmd_root/bin
dp_ipi
$ ls $deepmd_root/lib
libdeepmd_ipi.so libdeepmd_op.so libdeepmd.so
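As a quick sanity check, you can verify that the installed libraries resolve their shared-library dependencies with the standard ldd tool (the exact dependency list will vary between builds):
ldd $deepmd_root/lib/libdeepmd.so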
DeePMD-kit provides a module for running MD simulations with LAMMPS. Now make the DeePMD-kit module for LAMMPS.
cd $deepmd_source_dir/source/build
make lammps
DeePMD-kit will generate a module called USER-DEEPMD in the build directory. Now download your favorite LAMMPS code and uncompress it (assuming you have downloaded the tarball lammps-stable.tar.gz)
cd /some/workspace
tar xf lammps-stable.tar.gz
The source code of LAMMPS is stored in a directory named, for example, lammps-31Mar17. Now go into the LAMMPS source directory and copy the DeePMD-kit module like this
cd lammps-31Mar17/src/
cp -r $deepmd_source_dir/source/build/USER-DEEPMD .
Now build LAMMPS
make yes-user-deepmd
make mpi -j4
The option -j4 means using 4 processes in parallel. You may want to use a different number according to your hardware.
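If you are unsure how many cores are available, nproc reports the count on most Linux systems, so one could also build with
make mpi -j$(nproc)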
If everything works fine, you will end up with an executable lmp_mpi.
The DeePMD-kit module can be removed from LAMMPS source code by
make no-user-deepmd