Metadynamics issue with OpenMM-PLUMED #78
Comments
I don't know if it is related to the error, but those instructions may be outdated. There is a conda package now.
I tried conda-forge first but got the error message: "PackagesNotFoundError: The following packages are not available from current channels"
We now have conda packages for ARM Mac. The PR was just merged a couple of days ago. I'm not really sure what's going on with the original questions. It sounds like possibly more of a PLUMED issue than an OpenMM issue?
Thanks! I'll install the plugin from conda-forge to see if that solves the problem. I also ran the example below (it's supposed to be a working example): again, without the upper wall the simulation runs fine, and with the upper wall it blows up with NaN values. Here is the example's plumed-metal.py script with some minor changes (upper wall added). I continued running the example and tried to debug by following some of the suggested solutions: I decreased the time step, removed the heavy-atom-H constraints, and removed the barostat to run the simulation as NVT, but none of these helped. I printed the per-group energies at minimization start, simulation run start, and the point where the simulation stops. I also looked at the per-atom forces. There are some atoms with large forces (I'm not sure how large a force needs to be to cause a NaN issue). Is there a way to print the per-atom forces for only those two atoms in the script? I really appreciate any suggestions/help so I can get past this step and run the simulations.
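On the question of printing forces for just a couple of atoms: a minimal sketch. The `flag_large_forces` helper and the dummy force values are illustrative assumptions, not part of the thread's actual scripts; in a real OpenMM run you would feed it the forces from `simulation.context.getState(getForces=True)` converted to plain (fx, fy, fz) tuples in kJ/mol/nm.

```python
# Sketch: inspect per-atom force magnitudes for selected atoms only.
# `flag_large_forces` is a hypothetical helper; with OpenMM you would pass it
# the forces returned by simulation.context.getState(getForces=True).
import math

def flag_large_forces(forces, atom_indices, threshold=5000.0):
    """Return (atom_index, |F|) pairs for atoms whose force exceeds threshold."""
    flagged = []
    for i in atom_indices:
        fx, fy, fz = forces[i]
        magnitude = math.sqrt(fx * fx + fy * fy + fz * fz)
        if magnitude > threshold:
            flagged.append((i, magnitude))
    return flagged

# Dummy forces for three atoms; atom 1 has a suspiciously large force.
forces = [(100.0, 0.0, 0.0), (20000.0, 0.0, 0.0), (0.0, 300.0, 0.0)]
print(flag_large_forces(forces, atom_indices=[0, 1]))  # → [(1, 20000.0)]
```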
Is this the feature you're talking about? I'm not a PLUMED user. If it works without the upper wall but fails with it, there's a good chance it means the restraining force used to implement the wall is too large. In a typical simulation, most forces are below about 5000 kJ/mol/nm. You have forces four times that large, which could easily cause it to fail. What about reducing kappa to make the restraining force softer? Whatever value you have right now, try dividing it by 10. Does that bring the forces into a more reasonable range? Does it keep the simulation in the intended area without causing it to fail?
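The kappa tweak can be applied directly to the PLUMED script string before it reaches the plugin. A minimal sketch, assuming the wall is defined with an `UPPER_WALLS ... KAPPA=...` line; the example script line below is illustrative, not the poster's actual input.

```python
# Sketch: soften an UPPER_WALLS restraint by dividing KAPPA by a factor
# (10, per the suggestion above) before handing the script to the plugin.
# The example script line is illustrative, not the poster's actual input.
import re

def soften_kappa(plumed_script, factor=10.0):
    """Divide every KAPPA=<number> occurrence in the script by `factor`."""
    def repl(match):
        return "KAPPA=%g" % (float(match.group(1)) / factor)
    return re.sub(r"KAPPA=([0-9eE.+-]+)", repl, plumed_script)

script = "uwall: UPPER_WALLS ARG=d1 AT=2.0 KAPPA=1000.0 EXP=2"
print(soften_kappa(script))  # → uwall: UPPER_WALLS ARG=d1 AT=2.0 KAPPA=100 EXP=2
```

With openmm-plumed you would then build the force as `PlumedForce(soften_kappa(script))` and add it to the system.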
Yes, that is the feature. Is there any way to select a reasonable value for kappa without trial and error? Again, thanks so much for your suggestions/help.
Maybe try reducing EXP to soften it? I'm just guessing from looking at the documentation. I'm not a PLUMED user. I don't know how it's usually chosen. I notice that example script uses LangevinIntegrator, which is obsolete. If that's what you're using, change it to LangevinMiddleIntegrator. It's more stable.
Thanks! I'll follow your suggestions.
Ok. Let us know what you find.
Hi @peastman @GiovanniBussi Thanks much! |
Hi @peastman @GiovanniBussi @Lili-Summer I would also add that this behavior happens instantly, before Metad even manages to accumulate any sizable bias on CVs. 1 ps is quite enough for it. My Metad conditions are pretty mild:
Please tell me if any files from my side can be helpful. Otherwise I strongly suggest that some Ala-Ala test runs should be performed. Thanks!
Just checked with OpenMM 7.7.0, PLUMED 2.7.3, plugin v1. Best,
Hi, As Leila mentioned I have run into the (most likely) same problem. Here are the results of my analysis.

I ran 6 jobs with the same system and 3 CVs. The CVs are center-of-mass distances between groups of CA atoms. I ran 3 jobs using openmm 7.7, plugin 1.0, and plumed 2.8.1, and another 3 jobs using openmm 8.1.1, plugin 2.0, and plumed 2.9.0. I used 3 different biasing methods. One set of jobs is pure metadynamics (METAD), another set is pure eabf (DRR), and the 3rd set is meta-eabf (DRR on the real CVs and METAD on the fictitious CVs).

I attached 3 sets of plots; in each case the top represents plumed v2.9.0 and the bottom v2.8.1. The first is meta, the 2nd is eabf, and the 3rd is meta-eabf. I plotted data at every single time step until the NaNs appeared and eventually the jobs (at the top) crashed. I can also reconfirm that the crashes happened very fast, only after a few hundred time steps.

As you can see, something weird is going on. In the DRR jobs the COM distances computed on the real CV (purple) show some kind of resonance catastrophe until the difference between real and fictitious (green) CVs becomes too big and the force blows up the system. The plain METAD job is different, there are only real CVs, but as you can see, COM-dist1 does a huge jump out of nowhere.

Lastly, I also ran these exact same jobs using openmm 7.7, plugin 1.0, and plumed 2.9.0, but didn't see any weird behavior. So, it seems that the problem might be with plugin 2.0 indeed. Thanks for looking into this! Best regards, Istvan
Just a short thanks to the people who found the error source! I had the same issue and could solve it by downgrading openmm-plumed.
I also wanted to mention that the issue shows up on both M2 and Ubuntu Linux.
Just wanted to report the same issue, i.e., when using PLUMED for WT-metadynamics the system starts blowing up immediately after the simulation begins and crashes after ~1 ns. If I downgrade the plugin to v1.0, no crashes/blowups.
I've been trying to debug this issue using the example provided by @Lili-Summer. I just can't find anything that OpenMM is doing wrong. PLUMED really is returning huge forces. And if I remove the …
Is it possible the problem is caused by using the newer version of PLUMED, not the newer version of OpenMM?
Oh, I found it. After all my searching, it turned out to be something really simple: not clearing a vector before calling PLUMED. The fix is in #82. |
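For readers curious what "not clearing a vector" can do, here is a pure-Python illustration of that class of bug (not the plugin's actual C++ code): a force buffer reused across steps without being zeroed accumulates stale values, which grow step by step.

```python
# Pure-Python illustration (not the plugin's actual C++ code) of the bug
# class fixed in #82: reusing a buffer across calls without clearing it.
def add_plumed_forces(buffer, new_forces, clear_first=True):
    if clear_first:
        buffer[:] = [0.0] * len(buffer)  # the fix: zero stale entries first
    for i, f in enumerate(new_forces):
        buffer[i] += f
    return buffer

buf = [0.0, 0.0]
add_plumed_forces(buf, [1.0, 2.0], clear_first=False)
add_plumed_forces(buf, [1.0, 2.0], clear_first=False)
print(buf)  # → [2.0, 4.0]  (stale forces compound; with clearing: [1.0, 2.0])
```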
New conda packages are now up. |
Thank you, Peter!
István
Hello,
I am trying to run a MetaD simulation (on a Mac M2) with OpenMM-PLUMED. However, despite many efforts I have not been able to run the simulation due to one of the following errors:
1- Looking for a value outside the grid along the 0 dimension
When I check the output file I see that at some point the CV gets larger than the uwall limit, and if I plot "PES vs time" I see a sudden spike around the time the CV goes out of range.
2- If I increase GRID_MAX (as one of the suggested solutions), then I get another error: openmm.OpenMMException: Particle coordinate is NaN.
I visually inspected the trajectory (*.dcd) and the CB-H bond (ASP137) suddenly behaves poorly. I have heavy-atom-H constraints. I removed all the constraints and decreased the time step (2 fs to 0.5 fs) but neither helped. It seems something fundamental gets broken but I cannot find the source. The tar.gz file is attached to this message
with all the necessary files to reproduce the error (run as: nohup ./metaD.sh)
metaD.tar.gz
Versions:
openmm=8.1.1=py311h8ced375_0_khronos
plumed=2.9.0=nompi_haf67379_100
openmmplumed==2.0
I followed this link to install openmm-plumed:
https://github.com/giorginolab/miniomm/wiki/4.-Installing-OpenMM-Plumed-plugin
Thanks in advance for any suggestions/advice!!
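On the first error above, one way to avoid "value outside the grid" without blindly enlarging GRID_MAX is to pad the grid beyond the wall position, so the CV can briefly overshoot the wall before the restraint pushes it back. A hypothetical helper sketch; the function name and the `margin_frac` heuristic are assumptions, not a PLUMED recommendation.

```python
# Hypothetical helper: choose METAD grid bounds padded beyond the wall
# position (the margin heuristic is an assumption, not a PLUMED guideline).
def grid_bounds(cv_min, wall_at, margin_frac=0.25):
    """Return (GRID_MIN, GRID_MAX) with padding of margin_frac * span."""
    pad = margin_frac * (wall_at - cv_min)
    return cv_min - pad, wall_at + pad

gmin, gmax = grid_bounds(0.0, 2.0)
print("GRID_MIN=%g GRID_MAX=%g" % (gmin, gmax))  # → GRID_MIN=-0.5 GRID_MAX=2.5
```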