```
Traceback (most recent call last):
  File "main.py", line 116, in <module>
    main(config)
  File "main.py", line 107, in main
    solver.train()
  File "/pubdata/lyz/SMY/TaylorSENet-main/solver.py", line 107, in train
    train_avg_loss = self._run_one_epoch(epoch, val_opt=False)
  File "/pubdata/lyz/SMY/TaylorSENet-main/solver.py", line 381, in _run_one_epoch
    loss_dict = self._train_batch(batch_info)
  File "/pubdata/lyz/SMY/TaylorSENet-main/solver.py", line 368, in _train_batch
    self.update_params(batch_loss)
  File "/pubdata/lyz/SMY/TaylorSENet-main/solver.py", line 424, in update_params
    self.optimizer.step()
  File "/home/linyz/anaconda3/envs/py38/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/linyz/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/linyz/anaconda3/envs/py38/lib/python3.8/site-packages/torch/optim/adam.py", line 107, in step
    F.adam(params_with_grad,
  File "/home/linyz/anaconda3/envs/py38/lib/python3.8/site-packages/torch/optim/functional.py", line 86, in adam
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
I've checked for a long time but can't figure out which step leaves data off the GPU. I haven't modified the code. Any help would be appreciated.
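The traceback fails inside Adam's state update, which means the optimizer's `exp_avg` buffer and the parameter's gradient are on different devices (one on `cuda:0`, one on `cpu`). A minimal diagnostic sketch is below; it is not TaylorSENet code, and `model`, `optimizer`, and `device` are placeholders for the objects built in `solver.py`. It prints where every parameter, gradient, and Adam state tensor lives, and shows one common fix: moving optimizer state that was restored from a CPU checkpoint onto the parameters' device.

```python
import torch

def report_devices(model, optimizer):
    """Print the device of every parameter, gradient, and optimizer state tensor."""
    for name, p in model.named_parameters():
        grad_dev = p.grad.device if p.grad is not None else "no grad"
        print(f"param {name}: {p.device}, grad: {grad_dev}")

    # Adam keeps per-parameter state (exp_avg, exp_avg_sq); the traceback shows
    # exp_avg and grad on different devices, so inspect the state as well.
    for group in optimizer.param_groups:
        for p in group["params"]:
            state = optimizer.state.get(p, {})
            for key, value in state.items():
                if torch.is_tensor(value):
                    print(f"state {key}: {value.device}")

def move_optimizer_state(optimizer, device):
    """Move all optimizer state tensors (e.g. Adam's exp_avg) to the given device."""
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device)
```

Calling `report_devices(...)` right before `self.optimizer.step()` in `update_params` should reveal which tensor is still on `cpu`. Frequent causes of this error are creating the optimizer before moving the model to the GPU, or loading an optimizer `state_dict` that was saved on the CPU and never moving its state; in the latter case `move_optimizer_state(optimizer, device)` after loading the checkpoint is a possible remedy.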