Why `torch.is_floating_point` and `torch.is_complex` but no `is_integer`?

PyTorch currently provides `torch.is_floating_point` and `torch.is_complex` to check whether a tensor contains data of a certain numeric type. Why hasn't an equivalent method for integers been implemented yet?
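For reference, here is a minimal sketch of how the two existing checks behave on the standard dtypes:

```python
import torch

# Floating-point dtypes (float16/32/64, bfloat16) return True here:
print(torch.is_floating_point(torch.tensor([1.0])))   # True
# Complex dtypes (complex64/128) return True here:
print(torch.is_complex(torch.tensor([1 + 2j])))       # True
# Integer tensors return False for both checks:
print(torch.is_floating_point(torch.tensor([1])))     # False
print(torch.is_complex(torch.tensor([1])))            # False
```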

My guess is that it isn't as widely requested as the other two, but maybe there's a more profound reason. I'm considering opening a feature request on GitHub.

Creating a feature request sounds good.
In the meantime, maybe you could check whether your current tensor is not a floating point (or complex) tensor to get an idea whether it's an integer tensor?
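That workaround could be sketched like this. Note that `is_integer` here is a hypothetical helper, not a PyTorch function, and that `torch.bool` is also neither floating point nor complex, so you may want to exclude it explicitly:

```python
import torch

def is_integer(t: torch.Tensor) -> bool:
    # Hypothetical helper: treat a tensor as "integer" if it is neither
    # floating point nor complex; torch.bool is excluded explicitly,
    # since it would otherwise also pass this check.
    return (not t.is_floating_point()
            and not t.is_complex()
            and t.dtype is not torch.bool)

print(is_integer(torch.tensor([1, 2, 3])))    # True  (int64)
print(is_integer(torch.tensor([1.0, 2.0])))   # False (float32)
print(is_integer(torch.tensor([True])))       # False (bool)
```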

Thanks for your answer. I've created a feature request, let's see what comes out of it.