Hi there,
I am trying to get the torch.floor
function to return the “largest integer less than or equal to each element of input”, as described in the documentation. In practice, however, the floor function returns the largest integer-valued *float* that is less than or equal to each element of the input tensor.
See:
>>> a = torch.randn((100, 100)) * 10
>>> torch.floor(a).dtype
torch.float32
A quick search turned up an open issue (https://github.com/pytorch/pytorch/issues/36309) about this, which seems to have been “resolved” by introducing an optional out
argument.
However, this also doesn’t seem to work:
>>> a = torch.randn((100, 100)) * 10
>>> b = torch.zeros((100, 100), dtype=torch.int)
>>> torch.floor(a, out=b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: Found dtype Int but expected Float
The only way I have found to get the “largest integer less than or equal to the input” is to explicitly cast the returned float tensor to an int tensor. This is rather wasteful, since it allocates a second tensor and makes an extra pass over the data, which I would like to avoid.
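For reference, the explicit cast I am referring to looks like this (a minimal sketch; `torch.int64` is just one possible target dtype):

```python
import torch

a = torch.randn((100, 100)) * 10

# floor returns a float tensor; an explicit cast is needed to get integers,
# which allocates a second tensor
b = torch.floor(a).to(torch.int64)
print(b.dtype)  # torch.int64
```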
Alternatively, one can directly cast a
to a tensor of int
values. This works as long as all elements of a
are non-negative, because integer casting truncates toward zero rather than rounding toward negative infinity, but it fails in general.
Is there a better way to do that?