
error: AttributeError: module 'torch.cuda' has no attribute 'comm' #43

Open
siva-wellnesys opened this issue Dec 11, 2020 · 5 comments

siva-wellnesys commented Dec 11, 2020

Hi, I am getting this error while running demo.ipynb in Google Colab with my own image. Can you help me with this?


GeLee-Q commented Dec 22, 2020

I'm facing the same problem. Did you solve it?


hustxxj commented Dec 31, 2020

> I'm facing the same problem. Did you solve it?

Add `import torch.cuda.comm` in the file lib/core/integral_loss.py.


HyunJai commented Jan 20, 2021

> > I'm facing the same problem. Did you solve it?
>
> Add `import torch.cuda.comm` in the file lib/core/integral_loss.py.

I added `import torch.cuda.comm` to lib/core/integral_loss.py, but it still didn't work.

mkocabas (Owner) commented

Which PyTorch version are you using? It should work with the latest, v1.7.
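A quick way to check which build is installed and whether the `torch.cuda.comm` submodule resolves (a minimal sketch; on some versions the submodule is only populated after an explicit import, which is why the one-line fix above helps):

```python
import torch

# Print the installed PyTorch version, e.g. "1.7.1".
print(torch.__version__)

# Explicitly import the submodule; this works even without a CUDA device,
# since importing it does not touch the GPU.
import torch.cuda.comm

# After the explicit import, the attribute should resolve.
print(hasattr(torch.cuda, "comm"))
```

If `hasattr(torch.cuda, "comm")` prints `False` before the explicit import but `True` after it, adding the import to lib/core/integral_loss.py is the minimal fix.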


HyunJai commented Jan 22, 2021

In the file lib/core/integral_loss.py, change the lines below:

```python
accu_x = accu_x * torch.cuda.comm.broadcast(torch.arange(float(x_dim)), devices=[accu_x.device.index])[0]
accu_y = accu_y * torch.cuda.comm.broadcast(torch.arange(float(y_dim)), devices=[accu_y.device.index])[0]
accu_z = accu_z * torch.cuda.comm.broadcast(torch.arange(float(z_dim)), devices=[accu_z.device.index])[0]
```

to:

```python
device = torch.device('cuda')

accu_x = accu_x * torch.arange(float(x_dim)).to(device)
accu_y = accu_y * torch.arange(float(y_dim)).to(device)
accu_z = accu_z * torch.arange(float(z_dim)).to(device)
```

and then it will work.

