Problems with target arrays of int (int32) types in loss functions

It’s on our list of things to do to allow Int labels as well, but right now it is expected behavior that loss functions require LongTensors as labels.

We use Long labels because some of the use cases we had in Torch had nClasses that didn’t fit within Int precision limits.
Since PyTorch uses the same C backend, we went with Long labels there as well.
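A minimal sketch of the requirement (the model dimensions here are made up for illustration): an integer target tensor must be cast to int64 (a LongTensor) before being passed to a classification loss such as nn.CrossEntropyLoss.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.randn(4, 10)  # batch of 4 samples, 10 classes
targets = torch.tensor([1, 0, 3, 2], dtype=torch.int32)  # int32 labels

# Cast the labels to int64 (LongTensor) to satisfy the loss function:
loss = criterion(logits, targets.long())
print(loss.item())
```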

We do not recommend double for performance reasons, especially on the GPU: GPUs have poor double-precision performance and are optimized for float32.
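For example, a double-precision tensor can be cast down to float32 with .float() before running a model; this is a sketch of the cast, not a benchmark:

```python
import torch

x64 = torch.randn(3, 3, dtype=torch.float64)  # double precision
x32 = x64.float()                             # cast down to float32

print(x64.dtype)  # torch.float64
print(x32.dtype)  # torch.float32
```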

(P.S.: Is there a tensor attribute to return the type, e.g., something like NumPy’s my_array.dtype?)

You can simply get the class name:

import torch

x = torch.randn(10)
print(x.__class__)  # prints the tensor's class
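In more recent PyTorch releases, tensors also expose a NumPy-style .dtype attribute, and .type() returns the full type string:

```python
import torch

x = torch.randn(10)
print(x.dtype)   # torch.float32
print(x.type())  # 'torch.FloatTensor'
```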