I wanted to check whether a tensor is floating point (torch.half, torch.float, or torch.double) and noticed there is a method torch.Tensor.is_floating_point()
and a property torch.Tensor.dtype.is_floating_point
. Is one of these preferred over the other? Also, is there a reason these are not documented?
This is my first time hearing about both of them. I’ve filed an issue at https://github.com/pytorch/pytorch/issues/15700 to get them documented.
Looking at the implementations, my guess is that torch.Tensor.dtype.is_floating_point
might be faster than torch.Tensor.is_floating_point(),
but otherwise I don’t see a difference.
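For reference, here is a minimal sketch showing both checks side by side (standard PyTorch API; output values assume a default build of torch):

```python
import torch

t = torch.zeros(3, dtype=torch.float32)

# Property on the tensor's dtype object — no method-call dispatch on the tensor.
print(t.dtype.is_floating_point)  # True

# Method on the tensor itself.
print(t.is_floating_point())      # True

# Both agree for non-floating dtypes as well.
i = torch.zeros(3, dtype=torch.int64)
print(i.dtype.is_floating_point)  # False
print(i.is_floating_point())      # False
```

Either way you get the same boolean answer, so the choice mostly comes down to style and overhead.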
There are three ways of checking is_floating_point
. They are listed below in increasing order of time taken to return:

1. torch.Tensor.dtype.is_floating_point (fastest)
2. torch.Tensor.is_floating_point() (2x slower than the fastest)
3. torch.is_floating_point() (3x slower than the fastest)
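You can reproduce a comparison like this with timeit; the exact ratios will vary by machine and PyTorch version, so treat this as a sketch rather than a definitive benchmark:

```python
import timeit
import torch

t = torch.zeros(3)

# The three equivalent checks, from the property access to the free function
# torch.is_floating_point, which takes the tensor as its argument.
candidates = [
    ("t.dtype.is_floating_point", lambda: t.dtype.is_floating_point),
    ("t.is_floating_point()", lambda: t.is_floating_point()),
    ("torch.is_floating_point(t)", lambda: torch.is_floating_point(t)),
]

for label, fn in candidates:
    elapsed = timeit.timeit(fn, number=100_000)
    print(f"{label}: {elapsed:.4f}s")
```

All three return the same boolean, so the ordering above only matters in hot loops where the check runs many times.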