Currently it is not possible to run backwards twice with torch.compile. For instance, taking a second derivative through a compiled function fails.
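A minimal sketch of the failing pattern, mirroring the failing `torch.autograd.grad` line in tests/test_model.py (the function `f` and the tensor names below are illustrative, not the original snippet):

```python
import torch

@torch.compile
def f(x):
    # Any compiled scalar function of x works for illustration.
    return (x ** 3).sum()

x = torch.randn(10, requires_grad=True)

# First backward pass: fine.
(dy,) = torch.autograd.grad(f(x), x, create_graph=True)

# Second backward pass: raises the RuntimeError shown below.
ddy = torch.autograd.grad(dy, x, grad_outputs=torch.ones_like(dy))[0]
```

With this error: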
```
$ python test_model.py
Traceback (most recent call last):
  File "/home/raul/work/bcn/torchmd-net/tests/test_model.py", line 266, in <module>
    ddy = torch.autograd.grad(dy, input, grad_outputs=torch.ones_like(dy))[0]
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/raul/miniforge3/envs/torchnightly/lib/python3.11/site-packages/torch/autograd/__init__.py", line 412, in grad
    result = _engine_run_backward(
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/raul/miniforge3/envs/torchnightly/lib/python3.11/site-packages/torch/autograd/graph.py", line 744, in _engine_run_backward
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/raul/miniforge3/envs/torchnightly/lib/python3.11/site-packages/torch/autograd/function.py", line 301, in apply
    return user_fn(self, *args)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/raul/miniforge3/envs/torchnightly/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 877, in backward
    raise RuntimeError(
RuntimeError: torch.compile with aot_autograd does not currently support double backward
```
This happens even when using the latest PyTorch nightly:

```
pytorch 2.3.0.dev20240313 py3.11_cpu_0 pytorch-nightly
```

This is a well-known limitation of torch.compile: pytorch/pytorch#91469.
TorchMD-Net uses backpropagation to compute forces from energies, which means that double backpropagation is required to train with forces.
Thus, one cannot currently train a compiled model on forces; a training step along the lines of the sketch below triggers the same error as above.
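A sketch of the blocked training pattern, assuming a compiled energy model and a force-matching loss (the toy model below is a stand-in, not the actual TorchMD-Net API):

```python
import torch

# Toy stand-in for an energy model: positions -> per-sample energy.
energy_model = torch.compile(
    torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.SiLU(), torch.nn.Linear(32, 1))
)

pos = torch.randn(16, 3, requires_grad=True)
ref_forces = torch.randn(16, 3)

energy = energy_model(pos).sum()

# Forces are the negative gradient of the energy w.r.t. positions
# (first backward pass, kept in the graph so it can be trained through).
forces = -torch.autograd.grad(energy, pos, create_graph=True)[0]

# Training on forces requires a second backward pass, which currently
# fails with the same RuntimeError when the model is compiled.
loss = torch.nn.functional.mse_loss(forces, ref_forces)
loss.backward()
```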
I am opening this issue to keep track of the feature.