diff --git a/metadynamics/metadynamics-with-Q/index.html b/metadynamics/metadynamics-with-Q/index.html
index 254ac4f..9500bd7 100644
--- a/metadynamics/metadynamics-with-Q/index.html
+++ b/metadynamics/metadynamics-with-Q/index.html
@@ -102,9 +102,9 @@ In order to understand how to implement Q as a collective variable in PLUMED
one needs to understand how Q is defined:
$$Q(X) = \frac{1}{N}\sum_{\left(i,j\right)}\frac{1}{1+exp\left[\beta \left(r_{ij}(X)-\lambda r^{0}_{ij}\right)\right]}$$
-where the sum runs over the $N$ pairs of native contacts (i,j), $r_{ij}(X)$ is the
-distance between i and j in configuration $X$, $r^{0}_{ij}$ is the distance between
-$i$ and $j$ in the native state, $\beta$ is a smoothing parameter taken to be 5 ${AA}^{-1}$
+where the sum runs over the $N$ pairs of native contacts $(i,j)$, $r_{ij}(X)$ is the
+distance between $i$ and $j$ in configuration $X$, $r^{0}_{ij}$ is the distance between
+$i$ and $j$ in the native state, $\beta$ is a smoothing parameter taken to be 5 ${AA}^{-1}$
and the factor $\lambda$ accounts for fluctuations when the contact is formed, taken to be
1.2.
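
For reference, here is a minimal NumPy sketch of the definition above. This is illustrative only, not the PLUMED implementation of Q: the function name, array names, and the example distances are assumptions, and distances are taken to be in angstrom so that the defaults beta = 5 AA^-1 and lambda = 1.2 match the text.

```python
import numpy as np

def q_native(r, r0, beta=5.0, lam=1.2):
    """Fraction of native contacts Q(X).

    r, r0 : arrays of instantaneous and native distances (in angstrom)
            for the N native-contact pairs (i, j).
    beta  : smoothing parameter (angstrom^-1).
    lam   : tolerance factor for fluctuations around the native distance.
    """
    # Smooth switching function averaged over the N native contacts.
    return np.mean(1.0 / (1.0 + np.exp(beta * (r - lam * r0))))

# Hypothetical example: three contacts close to their native distances, so Q is near 1.
print(q_native(np.array([4.9, 6.2, 5.5]), np.array([5.0, 6.0, 5.4])))
```
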
diff --git a/search/search_index.json b/search/search_index.json
index c226f66..068db27 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome to BioKT-PLUMED-docs This website is a resource aimed at researchers from the BioKT lab to have working examples of different types of molecular dynamics simulations that can be run using Gromacs patched with PLUMED . These examples borrow extensively from the fantastic PLUMED masterclasses . However, what we show here is necessarily less exhaustive and simply intended to provide working examples of methods we routinely employ. Setup Simulation methods Umbrella sampling Metadynamics Simulations with multiple replicas Running simulations with PLUMED at DIPC","title":"Home"},{"location":"#welcome-to-biokt-plumed-docs","text":"This website is a resource aimed at researchers from the BioKT lab to have working examples of different types of molecular dynamics simulations that can be run using Gromacs patched with PLUMED . These examples borrow extensively from the fantastic PLUMED masterclasses . However, what we show here is necessarily less exhaustive and simply intended to provide working examples of methods we routinely employ.","title":"Welcome to BioKT-PLUMED-docs"},{"location":"#setup","text":"","title":"Setup"},{"location":"#simulation-methods","text":"Umbrella sampling Metadynamics Simulations with multiple replicas","title":"Simulation methods"},{"location":"#running-simulations-with-plumed-at-dipc","text":"","title":"Running simulations with PLUMED at DIPC"},{"location":"dipc/","text":"USE GROMACS + PLUMED IN @DIPC DIPC SYSTEMS For any information related with how DIPC computing system works (ATLAS), please see the corresponding web pages: * ATLAS * ATLAS EDR System * ATLAS FDR System The main difference between the 2 systems is that EDR has GPUs while FDR do not. For this reason, we will use for our purpose the EDR system . STARTING The first thing you need to do in Atlas for using any software is to load it. Use the command below to load the GROMACS software with the PLUMED plugin installed. To date: 13 / 03 / 2023 module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 The very basic Batch script for parallel execution of GROMACS is the following: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=6 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=8 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr For the use of GPUs you can use: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=1 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=48 #SBATCH --gres=gpu:p40:2 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp $SLURM_CPUS_PER_TASK -nb auto -bonded auto -pme auto -gpu_id 01 -s input.tpr","title":"PLUMED@DIPC"},{"location":"dipc/#use-gromacs-plumed-in-dipc","text":"","title":"USE GROMACS + PLUMED IN @DIPC"},{"location":"dipc/#dipc-systems","text":"For any information related with how DIPC computing system works (ATLAS), please see the corresponding web pages: * ATLAS * ATLAS EDR System * ATLAS FDR System The main difference between the 2 systems is that EDR has GPUs while FDR do not. 
For this reason, we will use for our purpose the EDR system .","title":"DIPC SYSTEMS"},{"location":"dipc/#starting","text":"The first thing you need to do in Atlas for using any software is to load it. Use the command below to load the GROMACS software with the PLUMED plugin installed. To date: 13 / 03 / 2023 module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 The very basic Batch script for parallel execution of GROMACS is the following: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=6 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=8 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr For the use of GPUs you can use: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=1 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=48 #SBATCH --gres=gpu:p40:2 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp $SLURM_CPUS_PER_TASK -nb auto -bonded auto -pme auto -gpu_id 01 -s input.tpr","title":"STARTING"},{"location":"replicas/","text":"Working with multiple replicas For a general description of how to integrate PLUMED when working with multiple replicas, check the PLUMED masterclass 21.5 . Note that for using this you will need both PLUMED and Gromacs to be installed using MPI. Check installation instructions here . REST2 Here we illustrate how to use the replica exchage with solute tempering method in its second iteration (i.e. REST2 ), which uses a different scaling scheme that the original REST . Before you start, please check the materials at the PLUMED website on Hamiltonian replica exchange . The first thing to check is whether all the files you will need are available in your folder. In the terminal, type $ ls data/replex You should find a number of files, including the Gromacs structure file alaTB_ff03_tip3p_npt.gro and another file called alaTB_ff03_tip3p_pp.top . The former is a properly equilibrated structure file, of the alanine dipeptide in TIP3P water. The latter is a pre-processed topology containing all the information required to run a simulation, and hence devoid of include directives . Note that in the [ atoms ] section, we have modified the standard names of the atoms with an underscore (i.e. HC is now HC_ ). This will mark these atoms as the solute whose interactions will be scaled in the REST2 scheme. The first step is hence to scale the interactions, and in order to do this, we use PLUMED `partial_tempering' program. Because we are running multiple replicas, each with a different scaling, the command will be run inside a loop in the following Python script: import sys, os gro = \"alaTB_ff03_tip3p_npt.gro\" toppp = \"alaTB_ff03_tip3p_pp.top\" mdp = \"sd_nvt.mdp\" # scale solute interactions nrep = 4 tmax = 1000 t0 = 300 for i in range(nrep): exponent = i/(nrep - 1) ti = t0*(tmax/t0)**exponent lmbd = t0/ti folder = \"rep%i\"%i try: os.mkdir(folder) except OSError as e: print (e) command = \"plumed partial_tempering %g < %s > %s/scaled.top\"%(lmbd, toppp, folder) os.system(command) top = \"%s/scaled.top\"%folder tpr = \"%s/alaTB_ff03_tip3p_nvt.tpr\"%folder command = gmx + \" grompp -f %s -p %s -c %s -o %s\"%(mdp, top, gro, tpr) os.system(command) In your machine, gmx should point to a working MPI installation of Gromacs. 
This script will generate a number of folders, each with a scaled topology file scaled.top . Because we have generated the run input files for Gromacs, we can simply run the simulations using mpiexec -np 4 gmx_mpi mdrun -s alaTB_ff03_tip3p_nvt -multidir rep* -nsteps 1000000 -plumed ../plumed.dat -hrex -replex 500 -v Note that, as I did, you may need the flag --mca opal_cuda_support 1 for running the simulations with GPU support. You can next verify whether your replica scheme has been successful demuxing the trajectories using the demux.pl script included in the Gromacs distribution: perl demux.pl rep0/md.log Again, you will likely need to provide the full path to the Perl script. Additionally, you can analyze the results to verify how much additional sampling has been done through the replica exchange scheme. We have included a Python notebook with some rudimentary analysis.","title":"Multiple replicas"},{"location":"replicas/#working-with-multiple-replicas","text":"For a general description of how to integrate PLUMED when working with multiple replicas, check the PLUMED masterclass 21.5 . Note that for using this you will need both PLUMED and Gromacs to be installed using MPI. Check installation instructions here .","title":"Working with multiple replicas"},{"location":"replicas/#rest2","text":"Here we illustrate how to use the replica exchage with solute tempering method in its second iteration (i.e. REST2 ), which uses a different scaling scheme that the original REST . Before you start, please check the materials at the PLUMED website on Hamiltonian replica exchange . The first thing to check is whether all the files you will need are available in your folder. In the terminal, type $ ls data/replex You should find a number of files, including the Gromacs structure file alaTB_ff03_tip3p_npt.gro and another file called alaTB_ff03_tip3p_pp.top . The former is a properly equilibrated structure file, of the alanine dipeptide in TIP3P water. The latter is a pre-processed topology containing all the information required to run a simulation, and hence devoid of include directives . Note that in the [ atoms ] section, we have modified the standard names of the atoms with an underscore (i.e. HC is now HC_ ). This will mark these atoms as the solute whose interactions will be scaled in the REST2 scheme. The first step is hence to scale the interactions, and in order to do this, we use PLUMED `partial_tempering' program. Because we are running multiple replicas, each with a different scaling, the command will be run inside a loop in the following Python script: import sys, os gro = \"alaTB_ff03_tip3p_npt.gro\" toppp = \"alaTB_ff03_tip3p_pp.top\" mdp = \"sd_nvt.mdp\" # scale solute interactions nrep = 4 tmax = 1000 t0 = 300 for i in range(nrep): exponent = i/(nrep - 1) ti = t0*(tmax/t0)**exponent lmbd = t0/ti folder = \"rep%i\"%i try: os.mkdir(folder) except OSError as e: print (e) command = \"plumed partial_tempering %g < %s > %s/scaled.top\"%(lmbd, toppp, folder) os.system(command) top = \"%s/scaled.top\"%folder tpr = \"%s/alaTB_ff03_tip3p_nvt.tpr\"%folder command = gmx + \" grompp -f %s -p %s -c %s -o %s\"%(mdp, top, gro, tpr) os.system(command) In your machine, gmx should point to a working MPI installation of Gromacs. This script will generate a number of folders, each with a scaled topology file scaled.top . 
Because we have generated the run input files for Gromacs, we can simply run the simulations using mpiexec -np 4 gmx_mpi mdrun -s alaTB_ff03_tip3p_nvt -multidir rep* -nsteps 1000000 -plumed ../plumed.dat -hrex -replex 500 -v Note that, as I did, you may need the flag --mca opal_cuda_support 1 for running the simulations with GPU support. You can next verify whether your replica scheme has been successful demuxing the trajectories using the demux.pl script included in the Gromacs distribution: perl demux.pl rep0/md.log Again, you will likely need to provide the full path to the Perl script. Additionally, you can analyze the results to verify how much additional sampling has been done through the replica exchange scheme. We have included a Python notebook with some rudimentary analysis.","title":"REST2"},{"location":"setup/","text":"Setting up PLUMED with Gromacs First of all you must patch the Gromacs code with PLUMED. There are multiple options available for you. Installing with conda If you only want a hassle-free installation to try things out, then you can simply use the pre-compiled versions made available by the PLUMED developers for their masterclasses. Here I follow the instructions from the first masterclass In the terminal, type $ conda create --name plumed-masterclass and then $ conda activate plumed-masterclass $ conda install -c conda-forge plumed py-plumed numpy pandas matplotlib notebook mdtraj mdanalysis git $ conda install --strict-channel-priority -c plumed/label/masterclass -c conda-forge gromacs Patch Gromacs with PLUMED First install PLUMED. In order to do this you will have to download and unzip the software $ wget https://github.com/plumed/plumed2/archive/refs/heads/master.zip $ unzip master.zip $ cd plumed2-master/ Then you will follow the usual steps to install software using Make $ ./configure --prefix=/usr/local $ make $ sudo make install You will also have to make sure that the installed libraries are visible in your path export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib The next step is to install Gromacs. First download the code and unzip the package. In this case I am patching the 2021.6 version of Gromacs. wget ftp://ftp.gromacs.org/gromacs/gromacs-2021.6.tar.gz tar -xvf gromacs-2021.6.tar.gz The next thing I did was to check whether I could use Cmake to install with my desired options, in this case running with GPUs and compiling with MPI support cd gromacs-2021.6/ mkdir build cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make Then comes the key step of running the plumed patch command cd .. plumed patch -p After that you can go back to your Gromacs install directory and finish the installation as usual cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make sudo make install /opt/gromacs/2021.6/bin/gmx_mpi mdrun -h","title":"Setup"},{"location":"setup/#setting-up-plumed-with-gromacs","text":"First of all you must patch the Gromacs code with PLUMED. There are multiple options available for you.","title":"Setting up PLUMED with Gromacs"},{"location":"setup/#installing-with-conda","text":"If you only want a hassle-free installation to try things out, then you can simply use the pre-compiled versions made available by the PLUMED developers for their masterclasses. 
Here I follow the instructions from the first masterclass In the terminal, type $ conda create --name plumed-masterclass and then $ conda activate plumed-masterclass $ conda install -c conda-forge plumed py-plumed numpy pandas matplotlib notebook mdtraj mdanalysis git $ conda install --strict-channel-priority -c plumed/label/masterclass -c conda-forge gromacs","title":"Installing with conda"},{"location":"setup/#patch-gromacs-with-plumed","text":"First install PLUMED. In order to do this you will have to download and unzip the software $ wget https://github.com/plumed/plumed2/archive/refs/heads/master.zip $ unzip master.zip $ cd plumed2-master/ Then you will follow the usual steps to install software using Make $ ./configure --prefix=/usr/local $ make $ sudo make install You will also have to make sure that the installed libraries are visible in your path export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib The next step is to install Gromacs. First download the code and unzip the package. In this case I am patching the 2021.6 version of Gromacs. wget ftp://ftp.gromacs.org/gromacs/gromacs-2021.6.tar.gz tar -xvf gromacs-2021.6.tar.gz The next thing I did was to check whether I could use Cmake to install with my desired options, in this case running with GPUs and compiling with MPI support cd gromacs-2021.6/ mkdir build cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make Then comes the key step of running the plumed patch command cd .. plumed patch -p After that you can go back to your Gromacs install directory and finish the installation as usual cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make sudo make install /opt/gromacs/2021.6/bin/gmx_mpi mdrun -h","title":"Patch Gromacs with PLUMED"},{"location":"umbrella/","text":"Umbrella sampling with PLUMED Alanine dipeptide in vacuum First of all, we will download the data required to run the calculations, which should be in the data/umbrella folder of this repository. It should contain the following files from the 21.3 PLUMED masterclass . foo@bar:~$ ls data/umbrella/ ala_dipeptide_analysis.ipynb topolA.tpr wham.py reference.pdb topolB.tpr Using the typical Gromacs syntax, you could use these tpr files to run MD simulations using foo@bar:~$ gmx mdrun -s topolA.tpr -nsteps 200000 -x traj_unbiased.xtc This will result in an unbiased simulation trajectory of the alanine dipeptide. When we run simulations with the PLUMED we will use an additional flag -plumed that points to the unique input required to bias the simulations. Below, you can find an example input file to estimate the values of the \\( \\phi \\) and \\( \\psi \\) dihedrals and histogram their values. MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 # make histograms hhphi: HISTOGRAM ARG=phi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi.dat STRIDE=200000 hhpsi: HISTOGRAM ARG=psi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi.dat STRIDE=200000 PRINT ARG=phi,psi FILE=colvar.dat STRIDE=100 Note that we are using a number of actions ( TORSION , HISTOGRAM and DUMPGRID , to name a few). You should familiarize yourself with their syntax . Introducing an umbrella bias in this script is very easy. 
It just requires adding an extra line to the script. For example, to restrain the simulation with a harmonic potential at 0 radians with a spring constant of 200, we would write # add restraining potential bb: RESTRAINT ARG=phi KAPPA=200.0 AT=0 Of course, we will be interested in running multiple window umbrella sampling, and hence the structure of our directory should change with as many plumed files as umbrellas windows. To generate 32 PLUMED input files, we can use the following Python script: at=np.linspace(-np.pi,np.pi,33)[:-1] print(at) for i in range(32): with open(\"plumed_\" + str(i) + \".dat\",\"w\") as f: print(\"\"\" # vim:ft=plumed MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT={} PRINT ARG=phi,psi,bb.bias FILE=colvar_multi_{}.dat STRIDE=100 \"\"\".format(at[i],i),file=f) And then we will run the 32 simulations foo@bar:~$ for i in $(seq 0 1 31) do gmx mdrun -plumed plumed_${i}.dat -s topolA.tpr -nsteps 200000 -x traj_comp_${i}.xtc done The result from this will be a dataset of 32 simulations, all initialized at the same conformation, but with umbrella potentials that will bias the sampling to different regions of conformational space. After completion of the runs, we must analyze the resulting trajectories. In order to do that we concatenate the trajectories foo@bar:~$ gmx trjcat -cat -f `for i in $(seq 0 1 31); do echo \"traj_comp_${i}.xtc\"; done` -o traj_multi_cat.xtc Then, we analyze them using plumed driver: foo@bar:~$ for i in $(seq 0 1 31) do plumed driver --plumed plumed_${i}.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 done To visualize the output of the simulations, we can read the colvar_multi_\\*.dat files and see what region of the Ramachandran map has been sampled by each of the runs. Using a Jupyter-notebook we can run the following import plumed import wham col=[] for i in range(32): col.append(plumed.read_as_pandas(\"colvar_multi_\" + str(i)+\".dat\")) bias = np.zeros((len(col[0][\"bb.bias\"]),32)) for i in range(32): bias[:,i] = col[i][\"bb.bias\"][-len(bias):] w = wham.wham(bias,T=kBT) colvar = col[0] colvar[\"logweights\"] = w[\"logW\"] plumed.write_pandas(colvar,\"bias_multi.dat\") We then write a file called plumed_multi.dat to obtain reweighted free energy surfaces. The file should look as follows: # vim:ft=plumed phi: READ FILE=bias_multi.dat VALUES=phi IGNORE_TIME psi: READ FILE=bias_multi.dat VALUES=psi IGNORE_TIME lw: READ FILE=bias_multi.dat VALUES=logweights IGNORE_TIME hhphi: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi_cat.dat hhpsi: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi_cat.dat hhphir: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffphir: CONVERT_TO_FES GRID=hhphir DUMPGRID GRID=ffphir FILE=fes_phi_catr.dat hhpsir: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffpsir: CONVERT_TO_FES GRID=hhpsir DUMPGRID GRID=ffpsir FILE=fes_psi_catr.dat This script is again read by plumed driver, and from this program we get our outputs. 
foo@bar:~$ plumed driver --plumed plumed_multi.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 --kt 2.4943387854 In the folder where you have found the data to run these tests you can find a script that will let you plot the free energy landscapes as estimated from umbrella sampling and compare them with those from the equilibrium MD trajectory. Clearly, using umbrella sampling we have been able to cover much more ground for our reaction coordinates, while obtaining a very consistent PMF in the regions that actually matter. One of the assignments in the masterclass is to run from a different initial state, that corresponding to the topolB.tpr Gromacs input file. Running with multiple replicas Typically, if we want to do molecular simulations using umbrella sampling, we will take advantage of the parallel capabilities of modern computers. Additionally, we may able to swap conformations between umbrella windows in order to enhance the sampling. This may be helpful to avoid getting trapped in local minima. The specifics on how to do this using PLUMED in Gromacs are described in the first part of the PLUMED 21.5 masterclass . In order to take advantage of this, the software must have been compiled using MPI. The PLUMED input file for this type of calculation is only incrementally more complicated than the one used before MOLINFO STRUCTURE=../reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT=@replicas:-3.141,-2.945,-2.748,-2.552,-2.356,-2.159,-1.963,-1.767,-1.570,-1.374,-1.178,-0.981,-0.785,-0.589,-0.392,-0.196,0.0,0.196,0.392,0.589,0.785,0.981,1.178,1.374,1.570,1.767,1.963,2.159,2.356,2.552,2.748,2.945 PRINT ARG=phi,psi,bb.bias FILE=../colvar_multi.dat STRIDE=100 Note the AT=@replicas section in the RESTRAINT action. In this case, we must generate as many directories (here termed run0 , run1 , and so on), each of which will contain the tpr file. Simulations are then run using foo@bar:~$ mpiexec -np 32 --oversubscribe gmx_mpi mdrun -multidir run* -plumed ../plumed.dat -s topolA.tpr -replex 200 -nsteps 200000 While much of the analysis will remain unchanged from the case of running each umbrella window independently, it is important to carefully check in the demuxed trajectories whether the different copies of the system diffuse across windows. Some copies may remain isolated from the rest, which may have catastrophic results.","title":"Umbrella sampling"},{"location":"umbrella/#umbrella-sampling-with-plumed","text":"","title":"Umbrella sampling with PLUMED"},{"location":"umbrella/#alanine-dipeptide-in-vacuum","text":"First of all, we will download the data required to run the calculations, which should be in the data/umbrella folder of this repository. It should contain the following files from the 21.3 PLUMED masterclass . foo@bar:~$ ls data/umbrella/ ala_dipeptide_analysis.ipynb topolA.tpr wham.py reference.pdb topolB.tpr Using the typical Gromacs syntax, you could use these tpr files to run MD simulations using foo@bar:~$ gmx mdrun -s topolA.tpr -nsteps 200000 -x traj_unbiased.xtc This will result in an unbiased simulation trajectory of the alanine dipeptide. When we run simulations with the PLUMED we will use an additional flag -plumed that points to the unique input required to bias the simulations. Below, you can find an example input file to estimate the values of the \\( \\phi \\) and \\( \\psi \\) dihedrals and histogram their values. 
MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 # make histograms hhphi: HISTOGRAM ARG=phi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi.dat STRIDE=200000 hhpsi: HISTOGRAM ARG=psi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi.dat STRIDE=200000 PRINT ARG=phi,psi FILE=colvar.dat STRIDE=100 Note that we are using a number of actions ( TORSION , HISTOGRAM and DUMPGRID , to name a few). You should familiarize yourself with their syntax . Introducing an umbrella bias in this script is very easy. It just requires adding an extra line to the script. For example, to restrain the simulation with a harmonic potential at 0 radians with a spring constant of 200, we would write # add restraining potential bb: RESTRAINT ARG=phi KAPPA=200.0 AT=0 Of course, we will be interested in running multiple window umbrella sampling, and hence the structure of our directory should change with as many plumed files as umbrellas windows. To generate 32 PLUMED input files, we can use the following Python script: at=np.linspace(-np.pi,np.pi,33)[:-1] print(at) for i in range(32): with open(\"plumed_\" + str(i) + \".dat\",\"w\") as f: print(\"\"\" # vim:ft=plumed MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT={} PRINT ARG=phi,psi,bb.bias FILE=colvar_multi_{}.dat STRIDE=100 \"\"\".format(at[i],i),file=f) And then we will run the 32 simulations foo@bar:~$ for i in $(seq 0 1 31) do gmx mdrun -plumed plumed_${i}.dat -s topolA.tpr -nsteps 200000 -x traj_comp_${i}.xtc done The result from this will be a dataset of 32 simulations, all initialized at the same conformation, but with umbrella potentials that will bias the sampling to different regions of conformational space. After completion of the runs, we must analyze the resulting trajectories. In order to do that we concatenate the trajectories foo@bar:~$ gmx trjcat -cat -f `for i in $(seq 0 1 31); do echo \"traj_comp_${i}.xtc\"; done` -o traj_multi_cat.xtc Then, we analyze them using plumed driver: foo@bar:~$ for i in $(seq 0 1 31) do plumed driver --plumed plumed_${i}.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 done To visualize the output of the simulations, we can read the colvar_multi_\\*.dat files and see what region of the Ramachandran map has been sampled by each of the runs. Using a Jupyter-notebook we can run the following import plumed import wham col=[] for i in range(32): col.append(plumed.read_as_pandas(\"colvar_multi_\" + str(i)+\".dat\")) bias = np.zeros((len(col[0][\"bb.bias\"]),32)) for i in range(32): bias[:,i] = col[i][\"bb.bias\"][-len(bias):] w = wham.wham(bias,T=kBT) colvar = col[0] colvar[\"logweights\"] = w[\"logW\"] plumed.write_pandas(colvar,\"bias_multi.dat\") We then write a file called plumed_multi.dat to obtain reweighted free energy surfaces. 
The file should look as follows: # vim:ft=plumed phi: READ FILE=bias_multi.dat VALUES=phi IGNORE_TIME psi: READ FILE=bias_multi.dat VALUES=psi IGNORE_TIME lw: READ FILE=bias_multi.dat VALUES=logweights IGNORE_TIME hhphi: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi_cat.dat hhpsi: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi_cat.dat hhphir: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffphir: CONVERT_TO_FES GRID=hhphir DUMPGRID GRID=ffphir FILE=fes_phi_catr.dat hhpsir: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffpsir: CONVERT_TO_FES GRID=hhpsir DUMPGRID GRID=ffpsir FILE=fes_psi_catr.dat This script is again read by plumed driver, and from this program we get our outputs. foo@bar:~$ plumed driver --plumed plumed_multi.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 --kt 2.4943387854 In the folder where you have found the data to run these tests you can find a script that will let you plot the free energy landscapes as estimated from umbrella sampling and compare them with those from the equilibrium MD trajectory. Clearly, using umbrella sampling we have been able to cover much more ground for our reaction coordinates, while obtaining a very consistent PMF in the regions that actually matter. One of the assignments in the masterclass is to run from a different initial state, that corresponding to the topolB.tpr Gromacs input file.","title":"Alanine dipeptide in vacuum"},{"location":"umbrella/#running-with-multiple-replicas","text":"Typically, if we want to do molecular simulations using umbrella sampling, we will take advantage of the parallel capabilities of modern computers. Additionally, we may able to swap conformations between umbrella windows in order to enhance the sampling. This may be helpful to avoid getting trapped in local minima. The specifics on how to do this using PLUMED in Gromacs are described in the first part of the PLUMED 21.5 masterclass . In order to take advantage of this, the software must have been compiled using MPI. The PLUMED input file for this type of calculation is only incrementally more complicated than the one used before MOLINFO STRUCTURE=../reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT=@replicas:-3.141,-2.945,-2.748,-2.552,-2.356,-2.159,-1.963,-1.767,-1.570,-1.374,-1.178,-0.981,-0.785,-0.589,-0.392,-0.196,0.0,0.196,0.392,0.589,0.785,0.981,1.178,1.374,1.570,1.767,1.963,2.159,2.356,2.552,2.748,2.945 PRINT ARG=phi,psi,bb.bias FILE=../colvar_multi.dat STRIDE=100 Note the AT=@replicas section in the RESTRAINT action. In this case, we must generate as many directories (here termed run0 , run1 , and so on), each of which will contain the tpr file. Simulations are then run using foo@bar:~$ mpiexec -np 32 --oversubscribe gmx_mpi mdrun -multidir run* -plumed ../plumed.dat -s topolA.tpr -replex 200 -nsteps 200000 While much of the analysis will remain unchanged from the case of running each umbrella window independently, it is important to carefully check in the demuxed trajectories whether the different copies of the system diffuse across windows. 
Some copies may remain isolated from the rest, which may have catastrophic results.","title":"Running with multiple replicas"},{"location":"metadynamics/metadynamics-with-Q/","text":"\\usepackage{amsmath} Defining native contacts (Q) as collective variable In order to understand how to implement Q as a collective variable in PLUMED one needs to understand how Q is defined: $$Q(X) = \\frac{1}{N}\\sum_{\\left(i,i\\right)}\\frac{1}{1+exp\\left[\\beta \\left(r_{ij}(X)-\\lambda r^{0}_{ij}\\right)\\right]}$$ where the sum runs over the $N$ pairs of native contacts (i,j) , $r_{ij}(X)$ is the distance between i and j in configuration $X$, $r^{0}_{ij}$ is the distance between $i$ and $j$ in the native state, $\\beta$ is a smoothing parameter taken to be 5 ${AA}^{-1}$ and the factor $\\lambda$ accounts for fluctuations when the contact is formed, taken to be 1.2.","title":"Defining Q as collective variable"},{"location":"metadynamics/metadynamics-with-Q/#defining-native-contacts-q-as-collective-variable","text":"In order to understand how to implement Q as a collective variable in PLUMED one needs to understand how Q is defined: $$Q(X) = \\frac{1}{N}\\sum_{\\left(i,i\\right)}\\frac{1}{1+exp\\left[\\beta \\left(r_{ij}(X)-\\lambda r^{0}_{ij}\\right)\\right]}$$ where the sum runs over the $N$ pairs of native contacts (i,j) , $r_{ij}(X)$ is the distance between i and j in configuration $X$, $r^{0}_{ij}$ is the distance between $i$ and $j$ in the native state, $\\beta$ is a smoothing parameter taken to be 5 ${AA}^{-1}$ and the factor $\\lambda$ accounts for fluctuations when the contact is formed, taken to be 1.2.","title":"Defining native contacts (Q) as collective variable"},{"location":"metadynamics/metadynamics/","text":"Metadynamics simulations using PLUMED One of the key functionalities from PLUMED is the possibility of running simulations using metadynamics. The contents from this section are borrowed from masterclass 21.4 , where the methodology is introduced in simple terms. Alanine dipeptide in vacuum To run this example, you will need the input files in the data folder from this repository. Then, write a plumed.dat file with the following contents MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi ... PRINT ARG=phi,psi FILE=colvarA.dat STRIDE=10 The key difference with umbrella sampling is that we are biasing with the METAD action. This allows us to explore the whole Ramachandran map in a single run. Again, we can simply run our simulation using the usual Gromacs command with an additional -plumed flag foo@bar:~$ gmx mdrun -s topolA.tpr --deffnm metadA -plumed plumed.dat -nsteps 10000000 The most important differences come in the analysis. The simulation has generated a file HILLS including the information corresponding to the adaptive biases added during the simulation. We can estimate the free energy surface on the biasing coordinate using the sum_hills command foo@bar:~$ plumed sum_hills --hills HILLS --stride 500 --mintozero This results in a series of files winth free energy surfaces on the collective variable with increasingly large amounts of data. Hopefully, this will show that the free energy estimation is well converged. Additionally, we can use the driver , for more analysis. 
For example, we can write the following script to reweight the simulation data and obtain the energy surface not only on the biased coordinate but also on other coordinates (in this case, the psi torsion angle) MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi RESTART=YES ... bias: REWEIGHT_BIAS ARG=metad.bias hhphi: HISTOGRAM ARG=phi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias hhpsi: HISTOGRAM ARG=psi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias ffphi: CONVERT_TO_FES GRID=hhphi ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffphi FILE=ffphi.dat DUMPGRID GRID=ffpsi FILE=ffpsi.dat PRINT ARG=phi,psi,metad.bias FILE=colvar_reweight.dat STRIDE=1 Then, on the command line, we run foo@bar:~$ plumed driver --mf_xtc metadA.xtc --plumed plumed_reweight.dat --kt 2.494339 The results are output to the ffphi.dat and ffpsi.dat files, the latter containing a notably noisy free energy landscape. We may want to overcome this problem adding a bias in two independent coordinates. MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi,psi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3,0.3 FILE=HILLS2d GRID_MIN=-pi,-pi GRID_MAX=pi,pi ... PRINT ARG=phi,psi FILE=colvarA2d.dat STRIDE=10 Again, the HILLS file can be analyzed using plumed sum_hills to produce a smooth two dimensional free energy landscape.","title":"Alanine dipeptide in vacuum"},{"location":"metadynamics/metadynamics/#metadynamics-simulations-using-plumed","text":"One of the key functionalities from PLUMED is the possibility of running simulations using metadynamics. The contents from this section are borrowed from masterclass 21.4 , where the methodology is introduced in simple terms.","title":"Metadynamics simulations using PLUMED"},{"location":"metadynamics/metadynamics/#alanine-dipeptide-in-vacuum","text":"To run this example, you will need the input files in the data folder from this repository. Then, write a plumed.dat file with the following contents MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi ... PRINT ARG=phi,psi FILE=colvarA.dat STRIDE=10 The key difference with umbrella sampling is that we are biasing with the METAD action. This allows us to explore the whole Ramachandran map in a single run. Again, we can simply run our simulation using the usual Gromacs command with an additional -plumed flag foo@bar:~$ gmx mdrun -s topolA.tpr --deffnm metadA -plumed plumed.dat -nsteps 10000000 The most important differences come in the analysis. The simulation has generated a file HILLS including the information corresponding to the adaptive biases added during the simulation. We can estimate the free energy surface on the biasing coordinate using the sum_hills command foo@bar:~$ plumed sum_hills --hills HILLS --stride 500 --mintozero This results in a series of files winth free energy surfaces on the collective variable with increasingly large amounts of data. Hopefully, this will show that the free energy estimation is well converged. Additionally, we can use the driver , for more analysis. 
For example, we can write the following script to reweight the simulation data and obtain the energy surface not only on the biased coordinate but also on other coordinates (in this case, the psi torsion angle) MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi RESTART=YES ... bias: REWEIGHT_BIAS ARG=metad.bias hhphi: HISTOGRAM ARG=phi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias hhpsi: HISTOGRAM ARG=psi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias ffphi: CONVERT_TO_FES GRID=hhphi ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffphi FILE=ffphi.dat DUMPGRID GRID=ffpsi FILE=ffpsi.dat PRINT ARG=phi,psi,metad.bias FILE=colvar_reweight.dat STRIDE=1 Then, on the command line, we run foo@bar:~$ plumed driver --mf_xtc metadA.xtc --plumed plumed_reweight.dat --kt 2.494339 The results are output to the ffphi.dat and ffpsi.dat files, the latter containing a notably noisy free energy landscape. We may want to overcome this problem adding a bias in two independent coordinates. MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi,psi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3,0.3 FILE=HILLS2d GRID_MIN=-pi,-pi GRID_MAX=pi,pi ... PRINT ARG=phi,psi FILE=colvarA2d.dat STRIDE=10 Again, the HILLS file can be analyzed using plumed sum_hills to produce a smooth two dimensional free energy landscape.","title":"Alanine dipeptide in vacuum"}]}
\ No newline at end of file
+{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome to BioKT-PLUMED-docs This website is a resource aimed at researchers from the BioKT lab to have working examples of different types of molecular dynamics simulations that can be run using Gromacs patched with PLUMED . These examples borrow extensively from the fantastic PLUMED masterclasses . However, what we show here is necessarily less exhaustive and simply intended to provide working examples of methods we routinely employ. Setup Simulation methods Umbrella sampling Metadynamics Simulations with multiple replicas Running simulations with PLUMED at DIPC","title":"Home"},{"location":"#welcome-to-biokt-plumed-docs","text":"This website is a resource aimed at researchers from the BioKT lab to have working examples of different types of molecular dynamics simulations that can be run using Gromacs patched with PLUMED . These examples borrow extensively from the fantastic PLUMED masterclasses . However, what we show here is necessarily less exhaustive and simply intended to provide working examples of methods we routinely employ.","title":"Welcome to BioKT-PLUMED-docs"},{"location":"#setup","text":"","title":"Setup"},{"location":"#simulation-methods","text":"Umbrella sampling Metadynamics Simulations with multiple replicas","title":"Simulation methods"},{"location":"#running-simulations-with-plumed-at-dipc","text":"","title":"Running simulations with PLUMED at DIPC"},{"location":"dipc/","text":"USE GROMACS + PLUMED IN @DIPC DIPC SYSTEMS For any information related with how DIPC computing system works (ATLAS), please see the corresponding web pages: * ATLAS * ATLAS EDR System * ATLAS FDR System The main difference between the 2 systems is that EDR has GPUs while FDR do not. For this reason, we will use for our purpose the EDR system . STARTING The first thing you need to do in Atlas for using any software is to load it. Use the command below to load the GROMACS software with the PLUMED plugin installed. To date: 13 / 03 / 2023 module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 The very basic Batch script for parallel execution of GROMACS is the following: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=6 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=8 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr For the use of GPUs you can use: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=1 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=48 #SBATCH --gres=gpu:p40:2 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp $SLURM_CPUS_PER_TASK -nb auto -bonded auto -pme auto -gpu_id 01 -s input.tpr","title":"PLUMED@DIPC"},{"location":"dipc/#use-gromacs-plumed-in-dipc","text":"","title":"USE GROMACS + PLUMED IN @DIPC"},{"location":"dipc/#dipc-systems","text":"For any information related with how DIPC computing system works (ATLAS), please see the corresponding web pages: * ATLAS * ATLAS EDR System * ATLAS FDR System The main difference between the 2 systems is that EDR has GPUs while FDR do not. 
For this reason, we will use for our purpose the EDR system .","title":"DIPC SYSTEMS"},{"location":"dipc/#starting","text":"The first thing you need to do in Atlas for using any software is to load it. Use the command below to load the GROMACS software with the PLUMED plugin installed. To date: 13 / 03 / 2023 module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 The very basic Batch script for parallel execution of GROMACS is the following: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=6 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=8 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp ${SLURM_CPUS_PER_TASK} -s input.tpr For the use of GPUs you can use: #!/bin/bash #SBATCH --partition=regular #SBATCH --job-name=GROMACS_job #SBATCH --mem=200gb #SBATCH --cpus-per-task=1 #SBATCH --nodes=2 #SBATCH --ntasks-per-node=48 #SBATCH --gres=gpu:p40:2 #SBATCH --output=%x-%j.out #SBATCH --error=%x-%j.err module load GROMACS/2021.3-fosscuda-2020b-PLUMED-2.7.2 srun gmx_mpi mdrun -ntomp $SLURM_CPUS_PER_TASK -nb auto -bonded auto -pme auto -gpu_id 01 -s input.tpr","title":"STARTING"},{"location":"replicas/","text":"Working with multiple replicas For a general description of how to integrate PLUMED when working with multiple replicas, check the PLUMED masterclass 21.5 . Note that for using this you will need both PLUMED and Gromacs to be installed using MPI. Check installation instructions here . REST2 Here we illustrate how to use the replica exchage with solute tempering method in its second iteration (i.e. REST2 ), which uses a different scaling scheme that the original REST . Before you start, please check the materials at the PLUMED website on Hamiltonian replica exchange . The first thing to check is whether all the files you will need are available in your folder. In the terminal, type $ ls data/replex You should find a number of files, including the Gromacs structure file alaTB_ff03_tip3p_npt.gro and another file called alaTB_ff03_tip3p_pp.top . The former is a properly equilibrated structure file, of the alanine dipeptide in TIP3P water. The latter is a pre-processed topology containing all the information required to run a simulation, and hence devoid of include directives . Note that in the [ atoms ] section, we have modified the standard names of the atoms with an underscore (i.e. HC is now HC_ ). This will mark these atoms as the solute whose interactions will be scaled in the REST2 scheme. The first step is hence to scale the interactions, and in order to do this, we use PLUMED `partial_tempering' program. Because we are running multiple replicas, each with a different scaling, the command will be run inside a loop in the following Python script: import sys, os gro = \"alaTB_ff03_tip3p_npt.gro\" toppp = \"alaTB_ff03_tip3p_pp.top\" mdp = \"sd_nvt.mdp\" # scale solute interactions nrep = 4 tmax = 1000 t0 = 300 for i in range(nrep): exponent = i/(nrep - 1) ti = t0*(tmax/t0)**exponent lmbd = t0/ti folder = \"rep%i\"%i try: os.mkdir(folder) except OSError as e: print (e) command = \"plumed partial_tempering %g < %s > %s/scaled.top\"%(lmbd, toppp, folder) os.system(command) top = \"%s/scaled.top\"%folder tpr = \"%s/alaTB_ff03_tip3p_nvt.tpr\"%folder command = gmx + \" grompp -f %s -p %s -c %s -o %s\"%(mdp, top, gro, tpr) os.system(command) In your machine, gmx should point to a working MPI installation of Gromacs. 
This script will generate a number of folders, each with a scaled topology file scaled.top . Because we have generated the run input files for Gromacs, we can simply run the simulations using mpiexec -np 4 gmx_mpi mdrun -s alaTB_ff03_tip3p_nvt -multidir rep* -nsteps 1000000 -plumed ../plumed.dat -hrex -replex 500 -v Note that, as I did, you may need the flag --mca opal_cuda_support 1 for running the simulations with GPU support. You can next verify whether your replica scheme has been successful demuxing the trajectories using the demux.pl script included in the Gromacs distribution: perl demux.pl rep0/md.log Again, you will likely need to provide the full path to the Perl script. Additionally, you can analyze the results to verify how much additional sampling has been done through the replica exchange scheme. We have included a Python notebook with some rudimentary analysis.","title":"Multiple replicas"},{"location":"replicas/#working-with-multiple-replicas","text":"For a general description of how to integrate PLUMED when working with multiple replicas, check the PLUMED masterclass 21.5 . Note that for using this you will need both PLUMED and Gromacs to be installed using MPI. Check installation instructions here .","title":"Working with multiple replicas"},{"location":"replicas/#rest2","text":"Here we illustrate how to use the replica exchage with solute tempering method in its second iteration (i.e. REST2 ), which uses a different scaling scheme that the original REST . Before you start, please check the materials at the PLUMED website on Hamiltonian replica exchange . The first thing to check is whether all the files you will need are available in your folder. In the terminal, type $ ls data/replex You should find a number of files, including the Gromacs structure file alaTB_ff03_tip3p_npt.gro and another file called alaTB_ff03_tip3p_pp.top . The former is a properly equilibrated structure file, of the alanine dipeptide in TIP3P water. The latter is a pre-processed topology containing all the information required to run a simulation, and hence devoid of include directives . Note that in the [ atoms ] section, we have modified the standard names of the atoms with an underscore (i.e. HC is now HC_ ). This will mark these atoms as the solute whose interactions will be scaled in the REST2 scheme. The first step is hence to scale the interactions, and in order to do this, we use PLUMED `partial_tempering' program. Because we are running multiple replicas, each with a different scaling, the command will be run inside a loop in the following Python script: import sys, os gro = \"alaTB_ff03_tip3p_npt.gro\" toppp = \"alaTB_ff03_tip3p_pp.top\" mdp = \"sd_nvt.mdp\" # scale solute interactions nrep = 4 tmax = 1000 t0 = 300 for i in range(nrep): exponent = i/(nrep - 1) ti = t0*(tmax/t0)**exponent lmbd = t0/ti folder = \"rep%i\"%i try: os.mkdir(folder) except OSError as e: print (e) command = \"plumed partial_tempering %g < %s > %s/scaled.top\"%(lmbd, toppp, folder) os.system(command) top = \"%s/scaled.top\"%folder tpr = \"%s/alaTB_ff03_tip3p_nvt.tpr\"%folder command = gmx + \" grompp -f %s -p %s -c %s -o %s\"%(mdp, top, gro, tpr) os.system(command) In your machine, gmx should point to a working MPI installation of Gromacs. This script will generate a number of folders, each with a scaled topology file scaled.top . 
Because we have generated the run input files for Gromacs, we can simply run the simulations using mpiexec -np 4 gmx_mpi mdrun -s alaTB_ff03_tip3p_nvt -multidir rep* -nsteps 1000000 -plumed ../plumed.dat -hrex -replex 500 -v Note that, as I did, you may need the flag --mca opal_cuda_support 1 for running the simulations with GPU support. You can next verify whether your replica scheme has been successful demuxing the trajectories using the demux.pl script included in the Gromacs distribution: perl demux.pl rep0/md.log Again, you will likely need to provide the full path to the Perl script. Additionally, you can analyze the results to verify how much additional sampling has been done through the replica exchange scheme. We have included a Python notebook with some rudimentary analysis.","title":"REST2"},{"location":"setup/","text":"Setting up PLUMED with Gromacs First of all you must patch the Gromacs code with PLUMED. There are multiple options available for you. Installing with conda If you only want a hassle-free installation to try things out, then you can simply use the pre-compiled versions made available by the PLUMED developers for their masterclasses. Here I follow the instructions from the first masterclass In the terminal, type $ conda create --name plumed-masterclass and then $ conda activate plumed-masterclass $ conda install -c conda-forge plumed py-plumed numpy pandas matplotlib notebook mdtraj mdanalysis git $ conda install --strict-channel-priority -c plumed/label/masterclass -c conda-forge gromacs Patch Gromacs with PLUMED First install PLUMED. In order to do this you will have to download and unzip the software $ wget https://github.com/plumed/plumed2/archive/refs/heads/master.zip $ unzip master.zip $ cd plumed2-master/ Then you will follow the usual steps to install software using Make $ ./configure --prefix=/usr/local $ make $ sudo make install You will also have to make sure that the installed libraries are visible in your path export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib The next step is to install Gromacs. First download the code and unzip the package. In this case I am patching the 2021.6 version of Gromacs. wget ftp://ftp.gromacs.org/gromacs/gromacs-2021.6.tar.gz tar -xvf gromacs-2021.6.tar.gz The next thing I did was to check whether I could use Cmake to install with my desired options, in this case running with GPUs and compiling with MPI support cd gromacs-2021.6/ mkdir build cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make Then comes the key step of running the plumed patch command cd .. plumed patch -p After that you can go back to your Gromacs install directory and finish the installation as usual cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make sudo make install /opt/gromacs/2021.6/bin/gmx_mpi mdrun -h","title":"Setup"},{"location":"setup/#setting-up-plumed-with-gromacs","text":"First of all you must patch the Gromacs code with PLUMED. There are multiple options available for you.","title":"Setting up PLUMED with Gromacs"},{"location":"setup/#installing-with-conda","text":"If you only want a hassle-free installation to try things out, then you can simply use the pre-compiled versions made available by the PLUMED developers for their masterclasses. 
Here I follow the instructions from the first masterclass In the terminal, type $ conda create --name plumed-masterclass and then $ conda activate plumed-masterclass $ conda install -c conda-forge plumed py-plumed numpy pandas matplotlib notebook mdtraj mdanalysis git $ conda install --strict-channel-priority -c plumed/label/masterclass -c conda-forge gromacs","title":"Installing with conda"},{"location":"setup/#patch-gromacs-with-plumed","text":"First install PLUMED. In order to do this you will have to download and unzip the software $ wget https://github.com/plumed/plumed2/archive/refs/heads/master.zip $ unzip master.zip $ cd plumed2-master/ Then you will follow the usual steps to install software using Make $ ./configure --prefix=/usr/local $ make $ sudo make install You will also have to make sure that the installed libraries are visible in your path export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib The next step is to install Gromacs. First download the code and unzip the package. In this case I am patching the 2021.6 version of Gromacs. wget ftp://ftp.gromacs.org/gromacs/gromacs-2021.6.tar.gz tar -xvf gromacs-2021.6.tar.gz The next thing I did was to check whether I could use Cmake to install with my desired options, in this case running with GPUs and compiling with MPI support cd gromacs-2021.6/ mkdir build cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make Then comes the key step of running the plumed patch command cd .. plumed patch -p After that you can go back to your Gromacs install directory and finish the installation as usual cd build/ cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=CUDA -DCMAKE_INSTALL_PREFIX=/opt/gromacs/2021.6 make sudo make install /opt/gromacs/2021.6/bin/gmx_mpi mdrun -h","title":"Patch Gromacs with PLUMED"},{"location":"umbrella/","text":"Umbrella sampling with PLUMED Alanine dipeptide in vacuum First of all, we will download the data required to run the calculations, which should be in the data/umbrella folder of this repository. It should contain the following files from the 21.3 PLUMED masterclass . foo@bar:~$ ls data/umbrella/ ala_dipeptide_analysis.ipynb topolA.tpr wham.py reference.pdb topolB.tpr Using the typical Gromacs syntax, you could use these tpr files to run MD simulations using foo@bar:~$ gmx mdrun -s topolA.tpr -nsteps 200000 -x traj_unbiased.xtc This will result in an unbiased simulation trajectory of the alanine dipeptide. When we run simulations with the PLUMED we will use an additional flag -plumed that points to the unique input required to bias the simulations. Below, you can find an example input file to estimate the values of the \\( \\phi \\) and \\( \\psi \\) dihedrals and histogram their values. MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 # make histograms hhphi: HISTOGRAM ARG=phi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi.dat STRIDE=200000 hhpsi: HISTOGRAM ARG=psi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi.dat STRIDE=200000 PRINT ARG=phi,psi FILE=colvar.dat STRIDE=100 Note that we are using a number of actions ( TORSION , HISTOGRAM and DUMPGRID , to name a few). You should familiarize yourself with their syntax . Introducing an umbrella bias in this script is very easy. 
It just requires adding an extra line to the script. For example, to restrain the simulation with a harmonic potential at 0 radians with a spring constant of 200, we would write # add restraining potential bb: RESTRAINT ARG=phi KAPPA=200.0 AT=0 Of course, we will be interested in running multiple-window umbrella sampling, and hence the structure of our directory should change, with as many PLUMED files as umbrella windows. To generate 32 PLUMED input files, we can use the following Python script: import numpy as np at=np.linspace(-np.pi,np.pi,33)[:-1] print(at) for i in range(32): with open(\"plumed_\" + str(i) + \".dat\",\"w\") as f: print(\"\"\" # vim:ft=plumed MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT={} PRINT ARG=phi,psi,bb.bias FILE=colvar_multi_{}.dat STRIDE=100 \"\"\".format(at[i],i),file=f) And then we will run the 32 simulations foo@bar:~$ for i in $(seq 0 1 31) do gmx mdrun -plumed plumed_${i}.dat -s topolA.tpr -nsteps 200000 -x traj_comp_${i}.xtc done The result from this will be a dataset of 32 simulations, all initialized at the same conformation, but with umbrella potentials that will bias the sampling to different regions of conformational space. After completion of the runs, we must analyze the resulting trajectories. In order to do that we concatenate the trajectories foo@bar:~$ gmx trjcat -cat -f `for i in $(seq 0 1 31); do echo \"traj_comp_${i}.xtc\"; done` -o traj_multi_cat.xtc Then, we analyze them using plumed driver: foo@bar:~$ for i in $(seq 0 1 31) do plumed driver --plumed plumed_${i}.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 done To visualize the output of the simulations, we can read the colvar_multi_\*.dat files and see what region of the Ramachandran map has been sampled by each of the runs. Using a Jupyter notebook we can run the following import plumed import wham import numpy as np kBT=2.4943387854 col=[] for i in range(32): col.append(plumed.read_as_pandas(\"colvar_multi_\" + str(i)+\".dat\")) bias = np.zeros((len(col[0][\"bb.bias\"]),32)) for i in range(32): bias[:,i] = col[i][\"bb.bias\"][-len(bias):] w = wham.wham(bias,T=kBT) colvar = col[0] colvar[\"logweights\"] = w[\"logW\"] plumed.write_pandas(colvar,\"bias_multi.dat\") We then write a file called plumed_multi.dat to obtain reweighted free energy surfaces. The file should look as follows: # vim:ft=plumed phi: READ FILE=bias_multi.dat VALUES=phi IGNORE_TIME psi: READ FILE=bias_multi.dat VALUES=psi IGNORE_TIME lw: READ FILE=bias_multi.dat VALUES=logweights IGNORE_TIME hhphi: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi_cat.dat hhpsi: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi_cat.dat hhphir: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffphir: CONVERT_TO_FES GRID=hhphir DUMPGRID GRID=ffphir FILE=fes_phi_catr.dat hhpsir: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffpsir: CONVERT_TO_FES GRID=hhpsir DUMPGRID GRID=ffpsir FILE=fes_psi_catr.dat This script is again read by the plumed driver, which produces our final outputs.
foo@bar:~$ plumed driver --plumed plumed_multi.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 --kt 2.4943387854 In the folder where you have found the data to run these tests you can find a script that will let you plot the free energy landscapes as estimated from umbrella sampling and compare them with those from the equilibrium MD trajectory. Clearly, using umbrella sampling we have been able to cover much more ground for our reaction coordinates, while obtaining a very consistent PMF in the regions that actually matter. One of the assignments in the masterclass is to run from a different initial state, the one corresponding to the topolB.tpr Gromacs input file. Running with multiple replicas Typically, if we want to do molecular simulations using umbrella sampling, we will take advantage of the parallel capabilities of modern computers. Additionally, we may be able to swap conformations between umbrella windows in order to enhance the sampling. This may be helpful to avoid getting trapped in local minima. The specifics on how to do this using PLUMED in Gromacs are described in the first part of the PLUMED 21.5 masterclass . In order to take advantage of this, the software must have been compiled using MPI. The PLUMED input file for this type of calculation is only incrementally more complicated than the one used before MOLINFO STRUCTURE=../reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT=@replicas:-3.141,-2.945,-2.748,-2.552,-2.356,-2.159,-1.963,-1.767,-1.570,-1.374,-1.178,-0.981,-0.785,-0.589,-0.392,-0.196,0.0,0.196,0.392,0.589,0.785,0.981,1.178,1.374,1.570,1.767,1.963,2.159,2.356,2.552,2.748,2.945 PRINT ARG=phi,psi,bb.bias FILE=../colvar_multi.dat STRIDE=100 Note the AT=@replicas section in the RESTRAINT action. In this case, we must generate as many directories as replicas (here termed run0 , run1 , and so on), each of which will contain the tpr file. Simulations are then run using foo@bar:~$ mpiexec -np 32 --oversubscribe gmx_mpi mdrun -multidir run* -plumed ../plumed.dat -s topolA.tpr -replex 200 -nsteps 200000 While much of the analysis will remain unchanged from the case of running each umbrella window independently, it is important to carefully check in the demuxed trajectories whether the different copies of the system diffuse across windows. Some copies may remain isolated from the rest, which may have catastrophic results.","title":"Umbrella sampling"},{"location":"umbrella/#umbrella-sampling-with-plumed","text":"","title":"Umbrella sampling with PLUMED"},{"location":"umbrella/#alanine-dipeptide-in-vacuum","text":"First of all, we will download the data required to run the calculations, which should be in the data/umbrella folder of this repository. It should contain the following files from the 21.3 PLUMED masterclass . foo@bar:~$ ls data/umbrella/ ala_dipeptide_analysis.ipynb topolA.tpr wham.py reference.pdb topolB.tpr Using the typical Gromacs syntax, you could use these tpr files to run MD simulations using foo@bar:~$ gmx mdrun -s topolA.tpr -nsteps 200000 -x traj_unbiased.xtc This will result in an unbiased simulation trajectory of the alanine dipeptide. When we run simulations with PLUMED we will use an additional flag -plumed that points to the input file required to bias the simulations. Below, you can find an example input file to estimate the values of the \( \phi \) and \( \psi \) dihedrals and histogram their values.
MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 # make histograms hhphi: HISTOGRAM ARG=phi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi.dat STRIDE=200000 hhpsi: HISTOGRAM ARG=psi STRIDE=100 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi.dat STRIDE=200000 PRINT ARG=phi,psi FILE=colvar.dat STRIDE=100 Note that we are using a number of actions ( TORSION , HISTOGRAM and DUMPGRID , to name a few). You should familiarize yourself with their syntax . Introducing an umbrella bias in this script is very easy. It just requires adding an extra line to the script. For example, to restrain the simulation with a harmonic potential at 0 radians with a spring constant of 200, we would write # add restraining potential bb: RESTRAINT ARG=phi KAPPA=200.0 AT=0 Of course, we will be interested in running multiple-window umbrella sampling, and hence the structure of our directory should change, with as many PLUMED files as umbrella windows. To generate 32 PLUMED input files, we can use the following Python script: import numpy as np at=np.linspace(-np.pi,np.pi,33)[:-1] print(at) for i in range(32): with open(\"plumed_\" + str(i) + \".dat\",\"w\") as f: print(\"\"\" # vim:ft=plumed MOLINFO STRUCTURE=reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT={} PRINT ARG=phi,psi,bb.bias FILE=colvar_multi_{}.dat STRIDE=100 \"\"\".format(at[i],i),file=f) And then we will run the 32 simulations foo@bar:~$ for i in $(seq 0 1 31) do gmx mdrun -plumed plumed_${i}.dat -s topolA.tpr -nsteps 200000 -x traj_comp_${i}.xtc done The result from this will be a dataset of 32 simulations, all initialized at the same conformation, but with umbrella potentials that will bias the sampling to different regions of conformational space. After completion of the runs, we must analyze the resulting trajectories. In order to do that we concatenate the trajectories foo@bar:~$ gmx trjcat -cat -f `for i in $(seq 0 1 31); do echo \"traj_comp_${i}.xtc\"; done` -o traj_multi_cat.xtc Then, we analyze them using plumed driver: foo@bar:~$ for i in $(seq 0 1 31) do plumed driver --plumed plumed_${i}.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 done To visualize the output of the simulations, we can read the colvar_multi_\*.dat files and see what region of the Ramachandran map has been sampled by each of the runs. Using a Jupyter notebook we can run the following import plumed import wham import numpy as np kBT=2.4943387854 col=[] for i in range(32): col.append(plumed.read_as_pandas(\"colvar_multi_\" + str(i)+\".dat\")) bias = np.zeros((len(col[0][\"bb.bias\"]),32)) for i in range(32): bias[:,i] = col[i][\"bb.bias\"][-len(bias):] w = wham.wham(bias,T=kBT) colvar = col[0] colvar[\"logweights\"] = w[\"logW\"] plumed.write_pandas(colvar,\"bias_multi.dat\")
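Before handing these weights back to PLUMED, it can be useful to sanity-check them directly in the notebook. The following is a minimal sketch, not part of the masterclass material, that assumes the w, colvar and kBT objects defined in the cell above; it builds a reweighted histogram of phi with plain numpy, and the resulting profile should resemble the fes_phi_catr.dat output produced by the PLUMED script below.
import numpy as np
import matplotlib.pyplot as plt
# 'w', 'colvar' and 'kBT' are assumed to come from the notebook cell above
weights = np.exp(w["logW"] - np.max(w["logW"]))    # turn WHAM log-weights into weights
phi = np.array(colvar["phi"])[-len(weights):]      # align frames with the bias matrix
hist, edges = np.histogram(phi, bins=100, range=(-np.pi, np.pi), weights=weights, density=True)
fes = -kBT * np.log(np.where(hist > 0, hist, np.nan))   # free energy up to an additive constant
plt.plot(0.5 * (edges[1:] + edges[:-1]), fes - np.nanmin(fes))
plt.xlabel("phi (rad)")
plt.ylabel("free energy (kJ/mol)")
plt.show()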
We then write a file called plumed_multi.dat to obtain reweighted free energy surfaces. The file should look as follows: # vim:ft=plumed phi: READ FILE=bias_multi.dat VALUES=phi IGNORE_TIME psi: READ FILE=bias_multi.dat VALUES=psi IGNORE_TIME lw: READ FILE=bias_multi.dat VALUES=logweights IGNORE_TIME hhphi: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffphi: CONVERT_TO_FES GRID=hhphi DUMPGRID GRID=ffphi FILE=fes_phi_cat.dat hhpsi: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffpsi FILE=fes_psi_cat.dat hhphir: HISTOGRAM ARG=phi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffphir: CONVERT_TO_FES GRID=hhphir DUMPGRID GRID=ffphir FILE=fes_phi_catr.dat hhpsir: HISTOGRAM ARG=psi GRID_MIN=-pi GRID_MAX=pi GRID_BIN=600 BANDWIDTH=0.05 LOGWEIGHTS=lw ffpsir: CONVERT_TO_FES GRID=hhpsir DUMPGRID GRID=ffpsir FILE=fes_psi_catr.dat This script is again read by the plumed driver, which produces our final outputs. foo@bar:~$ plumed driver --plumed plumed_multi.dat --ixtc traj_multi_cat.xtc --trajectory-stride 100 --kt 2.4943387854 In the folder where you have found the data to run these tests you can find a script that will let you plot the free energy landscapes as estimated from umbrella sampling and compare them with those from the equilibrium MD trajectory. Clearly, using umbrella sampling we have been able to cover much more ground for our reaction coordinates, while obtaining a very consistent PMF in the regions that actually matter. One of the assignments in the masterclass is to run from a different initial state, the one corresponding to the topolB.tpr Gromacs input file.","title":"Alanine dipeptide in vacuum"},{"location":"umbrella/#running-with-multiple-replicas","text":"Typically, if we want to do molecular simulations using umbrella sampling, we will take advantage of the parallel capabilities of modern computers. Additionally, we may be able to swap conformations between umbrella windows in order to enhance the sampling. This may be helpful to avoid getting trapped in local minima. The specifics on how to do this using PLUMED in Gromacs are described in the first part of the PLUMED 21.5 masterclass . In order to take advantage of this, the software must have been compiled using MPI. The PLUMED input file for this type of calculation is only incrementally more complicated than the one used before MOLINFO STRUCTURE=../reference.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 bb: RESTRAINT ARG=phi KAPPA=200.0 AT=@replicas:-3.141,-2.945,-2.748,-2.552,-2.356,-2.159,-1.963,-1.767,-1.570,-1.374,-1.178,-0.981,-0.785,-0.589,-0.392,-0.196,0.0,0.196,0.392,0.589,0.785,0.981,1.178,1.374,1.570,1.767,1.963,2.159,2.356,2.552,2.748,2.945 PRINT ARG=phi,psi,bb.bias FILE=../colvar_multi.dat STRIDE=100 Note the AT=@replicas section in the RESTRAINT action. In this case, we must generate as many directories as replicas (here termed run0 , run1 , and so on), each of which will contain the tpr file. Simulations are then run using foo@bar:~$ mpiexec -np 32 --oversubscribe gmx_mpi mdrun -multidir run* -plumed ../plumed.dat -s topolA.tpr -replex 200 -nsteps 200000 While much of the analysis will remain unchanged from the case of running each umbrella window independently, it is important to carefully check in the demuxed trajectories whether the different copies of the system diffuse across windows. Some copies may remain isolated from the rest, which may have catastrophic results.
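One simple way to perform this check is to look at the demuxed index file written by demux.pl and follow which window each continuous trajectory visits over time. The sketch below is only an illustration: it assumes the file is called replica_index.xvg, with a time column followed by one column per replica, which is the usual demux.pl output, so adapt the file name and columns to your own run.
import numpy as np
import matplotlib.pyplot as plt
# Assumption: demux.pl wrote replica_index.xvg with time in the first column
# and one window index per replica in the remaining columns.
data = np.loadtxt("replica_index.xvg", comments=["@", "#"])
time, indices = data[:, 0], data[:, 1:]
for k in range(indices.shape[1]):
    plt.plot(time, indices[:, k], lw=0.5)
# A well-mixed run shows every trace wandering over many window indices;
# a flat trace is a copy that never leaves its starting window.
plt.xlabel("time (ps)")
plt.ylabel("window index")
plt.show()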
","title":"Running with multiple replicas"},{"location":"metadynamics/metadynamics-with-Q/","text":"Defining native contacts (Q) as collective variable In order to understand how to implement Q as a collective variable in PLUMED one needs to understand how Q is defined: $$Q(X) = \\frac{1}{N}\\sum_{\\left(i,j\\right)}\\frac{1}{1+\\exp\\left[\\beta \\left(r_{ij}(X)-\\lambda r^{0}_{ij}\\right)\\right]}$$ where the sum runs over the $N$ pairs of native contacts $(i,j)$, $r_{ij}(X)$ is the distance between $i$ and $j$ in configuration $X$, $r^{0}_{ij}$ is the distance between $i$ and $j$ in the native state, $\\beta$ is a smoothing parameter taken to be 5 Å$^{-1}$ and the factor $\\lambda$ accounts for fluctuations when the contact is formed, taken to be 1.2.","title":"Defining Q as collective variable"},{"location":"metadynamics/metadynamics-with-Q/#defining-native-contacts-q-as-collective-variable","text":"In order to understand how to implement Q as a collective variable in PLUMED one needs to understand how Q is defined: $$Q(X) = \\frac{1}{N}\\sum_{\\left(i,j\\right)}\\frac{1}{1+\\exp\\left[\\beta \\left(r_{ij}(X)-\\lambda r^{0}_{ij}\\right)\\right]}$$ where the sum runs over the $N$ pairs of native contacts $(i,j)$, $r_{ij}(X)$ is the distance between $i$ and $j$ in configuration $X$, $r^{0}_{ij}$ is the distance between $i$ and $j$ in the native state, $\\beta$ is a smoothing parameter taken to be 5 Å$^{-1}$ and the factor $\\lambda$ accounts for fluctuations when the contact is formed, taken to be 1.2.","title":"Defining native contacts (Q) as collective variable"},
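To make the definition concrete, here is a small illustrative Python version of the same expression written with plain numpy. The function name and its arguments are ours, not part of PLUMED, and in practice the pair distances would come from your trajectory and from the native structure.
import numpy as np

def native_contacts_q(r, r0, beta=5.0, lam=1.2):
    # r: distances r_ij(X) over the N native pairs, in Angstrom
    # r0: native-state distances r0_ij, same order and units
    # beta: smoothing parameter (5 per Angstrom, as above)
    # lam: fluctuation factor lambda (1.2, as above)
    r, r0 = np.asarray(r), np.asarray(r0)
    return np.mean(1.0 / (1.0 + np.exp(beta * (r - lam * r0))))

# toy example: three native pairs, with distances slightly above their native values
print(native_contacts_q(r=[5.1, 6.3, 8.0], r0=[5.0, 6.0, 7.5]))
In an actual PLUMED input you would not code this by hand; the sketch only shows how $\beta$ and $\lambda$ enter the switching function.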
{"location":"metadynamics/metadynamics/","text":"Metadynamics simulations using PLUMED One of the key functionalities from PLUMED is the possibility of running simulations using metadynamics. The contents from this section are borrowed from masterclass 21.4 , where the methodology is introduced in simple terms. Alanine dipeptide in vacuum To run this example, you will need the input files in the data folder from this repository. Then, write a plumed.dat file with the following contents MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi ... PRINT ARG=phi,psi FILE=colvarA.dat STRIDE=10 The key difference with umbrella sampling is that we are biasing with the METAD action. This allows us to explore the whole Ramachandran map in a single run. Again, we can simply run our simulation using the usual Gromacs command with an additional -plumed flag foo@bar:~$ gmx mdrun -s topolA.tpr --deffnm metadA -plumed plumed.dat -nsteps 10000000 The most important differences come in the analysis. The simulation has generated a file HILLS including the information corresponding to the adaptive biases added during the simulation. We can estimate the free energy surface on the biasing coordinate using the sum_hills command foo@bar:~$ plumed sum_hills --hills HILLS --stride 500 --mintozero This results in a series of files with free energy surfaces on the collective variable, estimated from increasingly large amounts of data. Hopefully, this will show that the free energy estimation is well converged. Additionally, we can use the driver for more analysis. For example, we can write the following script to reweight the simulation data and obtain the energy surface not only on the biased coordinate but also on other coordinates (in this case, the psi torsion angle) MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi RESTART=YES ... bias: REWEIGHT_BIAS ARG=metad.bias hhphi: HISTOGRAM ARG=phi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias hhpsi: HISTOGRAM ARG=psi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias ffphi: CONVERT_TO_FES GRID=hhphi ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffphi FILE=ffphi.dat DUMPGRID GRID=ffpsi FILE=ffpsi.dat PRINT ARG=phi,psi,metad.bias FILE=colvar_reweight.dat STRIDE=1 Then, on the command line, we run foo@bar:~$ plumed driver --mf_xtc metadA.xtc --plumed plumed_reweight.dat --kt 2.494339 The results are output to the ffphi.dat and ffpsi.dat files, the latter containing a notably noisy free energy landscape. We may want to overcome this problem by adding a bias on two independent coordinates. MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi,psi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3,0.3 FILE=HILLS2d GRID_MIN=-pi,-pi GRID_MAX=pi,pi ... PRINT ARG=phi,psi FILE=colvarA2d.dat STRIDE=10 Again, the HILLS file can be analyzed using plumed sum_hills to produce a smooth two-dimensional free energy landscape.","title":"Alanine dipeptide in vacuum"},{"location":"metadynamics/metadynamics/#metadynamics-simulations-using-plumed","text":"One of the key functionalities from PLUMED is the possibility of running simulations using metadynamics. The contents from this section are borrowed from masterclass 21.4 , where the methodology is introduced in simple terms.","title":"Metadynamics simulations using PLUMED"},{"location":"metadynamics/metadynamics/#alanine-dipeptide-in-vacuum","text":"To run this example, you will need the input files in the data folder from this repository. Then, write a plumed.dat file with the following contents MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi ... PRINT ARG=phi,psi FILE=colvarA.dat STRIDE=10 The key difference with umbrella sampling is that we are biasing with the METAD action. This allows us to explore the whole Ramachandran map in a single run. Again, we can simply run our simulation using the usual Gromacs command with an additional -plumed flag foo@bar:~$ gmx mdrun -s topolA.tpr --deffnm metadA -plumed plumed.dat -nsteps 10000000 The most important differences come in the analysis. The simulation has generated a file HILLS including the information corresponding to the adaptive biases added during the simulation. We can estimate the free energy surface on the biasing coordinate using the sum_hills command foo@bar:~$ plumed sum_hills --hills HILLS --stride 500 --mintozero This results in a series of files with free energy surfaces on the collective variable, estimated from increasingly large amounts of data. Hopefully, this will show that the free energy estimation is well converged.
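To judge convergence visually, it helps to overlay the successive free energy estimates written by sum_hills. The following is a minimal sketch that assumes the --stride option produced files named fes_0.dat, fes_1.dat, ... on a one-dimensional grid, with the collective variable in the first column and the free energy in the second; adjust the names and columns if your output differs.
import glob
import numpy as np
import matplotlib.pyplot as plt
# Assumption: sum_hills --stride wrote fes_0.dat, fes_1.dat, ... in this directory.
files = sorted(glob.glob("fes_*.dat"), key=lambda s: int(s.split("_")[1].split(".")[0]))
for fname in files:
    grid = np.loadtxt(fname)          # column 0: phi, column 1: free energy
    plt.plot(grid[:, 0], grid[:, 1], label=fname)
plt.xlabel("phi (rad)")
plt.ylabel("free energy (kJ/mol)")
plt.legend(fontsize=6)
plt.show()
If the later profiles fall on top of each other, the estimate can be considered reasonably converged.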
Additionally, we can use the driver for more analysis. For example, we can write the following script to reweight the simulation data and obtain the energy surface not only on the biased coordinate but also on other coordinates (in this case, the psi torsion angle) MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3 FILE=HILLS GRID_MIN=-pi GRID_MAX=pi RESTART=YES ... bias: REWEIGHT_BIAS ARG=metad.bias hhphi: HISTOGRAM ARG=phi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias hhpsi: HISTOGRAM ARG=psi STRIDE=50 GRID_MIN=-pi GRID_MAX=pi GRID_BIN=50 BANDWIDTH=0.05 LOGWEIGHTS=bias ffphi: CONVERT_TO_FES GRID=hhphi ffpsi: CONVERT_TO_FES GRID=hhpsi DUMPGRID GRID=ffphi FILE=ffphi.dat DUMPGRID GRID=ffpsi FILE=ffpsi.dat PRINT ARG=phi,psi,metad.bias FILE=colvar_reweight.dat STRIDE=1 Then, on the command line, we run foo@bar:~$ plumed driver --mf_xtc metadA.xtc --plumed plumed_reweight.dat --kt 2.494339 The results are output to the ffphi.dat and ffpsi.dat files, the latter containing a notably noisy free energy landscape. We may want to overcome this problem by adding a bias on two independent coordinates. MOLINFO STRUCTURE=dialaA.pdb phi: TORSION ATOMS=@phi-2 psi: TORSION ATOMS=@psi-2 metad: METAD ARG=phi,psi ... PACE=500 HEIGHT=1.2 BIASFACTOR=8 SIGMA=0.3,0.3 FILE=HILLS2d GRID_MIN=-pi,-pi GRID_MAX=pi,pi ... PRINT ARG=phi,psi FILE=colvarA2d.dat STRIDE=10 Again, the HILLS file can be analyzed using plumed sum_hills to produce a smooth two-dimensional free energy landscape.","title":"Alanine dipeptide in vacuum"}]}
\ No newline at end of file