When concatenating a torch.long tensor with a
torch.float tensor, I observe numerical errors:
a = torch.tensor([-123456789]) # dtype is torch.int64
b = torch.ones(1,) # dtype is torch.float32
c = torch.cat([a, b]) # dtype is torch.float32
print(c.item()) # results in -123456792
That’s expected due to the limited numerical precision of float32.
Take a look at the precision limits described in Single-precision floating-point format - Wikipedia.
Your value lies between
[-2**27, -2**26], where the spacing between representable float32 values is 2**(26-23) = 8, so it rounds to the nearest multiple of 8.
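You can reproduce this rounding without torch by round-tripping the integer through IEEE-754 single precision via the stdlib struct module (a minimal sketch; the helper name is my own):

```python
import struct

def round_to_float32(x: int) -> int:
    # Round-trip an integer through IEEE-754 single precision (float32)
    return int(struct.unpack('f', struct.pack('f', float(x)))[0])

# Between 2**26 and 2**27 in magnitude, float32 values are spaced
# 2**(26-23) = 8 apart, so only multiples of 8 are representable.
print(round_to_float32(-123456789))   # -123456792, the nearest multiple of 8
print(round_to_float32(-123456792))   # -123456792, already exactly representable
```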
I see, thanks. Is there a way to show a warning in such cases?
Rounding is everywhere, so a general warning wouldn’t be useful. I thus assume you are interested in a warning when an integer value is converted to a floating-point one and cannot be represented exactly?
If so, feel free to create a feature request on GitHub so that the code owners can discuss it.
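In the meantime, you could add such a check on the user side yourself. Here is a minimal sketch (the function name and message are my own, and it uses the stdlib struct module rather than any torch API) that warns when an integer does not survive the round trip to float32:

```python
import struct
import warnings

def warn_if_lossy_float32(x: int) -> None:
    # Warn when x cannot be represented exactly as an IEEE-754 float32.
    roundtrip = int(struct.unpack('f', struct.pack('f', float(x)))[0])
    if roundtrip != x:
        warnings.warn(f"{x} is not exactly representable in float32 "
                      f"(rounds to {roundtrip})")

warn_if_lossy_float32(-123456789)  # warns: rounds to -123456792
warn_if_lossy_float32(1024)        # silent: exactly representable
```

You would call this on the int64 tensor’s values before the cat, accepting the cost of the extra round trip.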