a = torch.arange(0, 0.5, 1, dtype=torch.int64)
a
tensor([], dtype=torch.int64)
a = torch.arange(0, 0.5, 1, dtype=torch.int32)
a
tensor([0], dtype=torch.int32)
Why is the size of ‘a’ 0 when dtype is int64, whereas it is 1 for int32? Logically speaking, the first element is 0 either way, so the size should have been 1 even for the int64 type, shouldn’t it?
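For reference, the torch.arange documentation describes the output as a 1-D tensor with ceil((end - start) / step) elements, which would be 1 here regardless of dtype. A small sketch of that expectation versus what I actually see (the outputs in the comments are from my run, matching the transcript above):

import math
import torch

start, end, step = 0, 0.5, 1

# Documented size: ceil((end - start) / step)
print(math.ceil((end - start) / step))                              # 1

print(torch.arange(start, end, step, dtype=torch.int32).numel())    # 1, as expected
print(torch.arange(start, end, step, dtype=torch.int64).numel())    # 0 on my build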
I see the same behavior (on both the CPU and the GPU) on PyTorch 2.6.0.
I agree with you that the int32 behavior is correct and the int64 behavior is wrong.
This looks like a bug to me. Perhaps you could file it as a GitHub issue and see what the experts think.
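In the meantime, if you need the int64 result, one possible workaround is to let arange run with its float arguments and cast afterwards. This is just a sketch that sidesteps the symptom, not a fix for the underlying size calculation:

import torch

# Workaround sketch: build the range with the default (float) dtype first,
# then cast to int64, so the length computation isn't affected by the integer dtype.
a = torch.arange(0, 0.5, 1).to(torch.int64)
print(a)  # tensor([0])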