0-dim named tensors crash on _force_order #93

Open
gtanzer opened this issue Mar 6, 2019 · 0 comments
gtanzer commented Mar 6, 2019

Scalar named tensors (without named dimensions), like those returned from loss functions, crash when _force_order is called internally. A minimal example is:

import torch
from namedtensor import NamedTensor

a = NamedTensor(torch.tensor(0.0), ())
b = NamedTensor(torch.tensor(1.0), ())
a + b

The relevant part of the trace is:

...
/usr/local/lib/python3.6/dist-packages/namedtensor/core.py in _force_order(self, names)
    195                 trans.append(d)
    196         return self.__class__(
--> 197             self.transpose(*trans)._tensor.contiguous().view(*view), ex
    198         )
    199 

/usr/local/lib/python3.6/dist-packages/namedtensor/core.py in transpose(self, *dims)
    132         )
    133         indices = [self._schema.get(d) for d in to_dims]
--> 134         tensor = self._tensor.permute(*indices)
    135         return self.__class__(tensor, to_dims)
    136 

TypeError: permute() missing 1 required positional arguments: "dims"
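The root cause appears to be that transpose builds an empty index list for a 0-dim tensor and then calls self._tensor.permute(*indices), but permute in this PyTorch version requires at least one positional argument. A minimal pure-Python sketch of the failure mode and a possible guard (the safe_permute helper is hypothetical, not part of namedtensor):

```python
def permute(dim0, *dims):
    """Stand-in for torch.Tensor.permute in the affected version:
    it required at least one positional argument."""
    return (dim0, *dims)

def safe_permute(permute_fn, indices):
    # Hypothetical fix: skip the permute call entirely when there are
    # no dimensions to reorder, as with a 0-dim (scalar) tensor.
    if not indices:
        return ()
    return permute_fn(*indices)

indices = []  # schema of a scalar NamedTensor: no named dimensions

# Unpacking the empty list reproduces the crash:
try:
    permute(*indices)
    crashed = False
except TypeError:
    crashed = True

print(crashed)                          # True
print(safe_permute(permute, indices))   # ()
```

The same early-return check inside _force_order (returning self when there are no dimensions) would likely also avoid the crash in the view call on line 197.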