On concatenating a torch.long tensor with a torch.float32 one, I observe small numerical errors.

import torch

a = torch.tensor([-123456789]) # dtype is torch.int64
b = torch.ones(1,) # dtype is torch.float32
c = torch.cat([a, b]) # dtype is torch.float32 (int64 is promoted to float32)
print(c[0].item()) # results in -123456792

That’s expected due to the limited precision of float32.
Take a look at Single-precision floating-point format - Wikipedia and the precision limits.
Your value lies in [-2**27, -2**26], where adjacent float32 values are 2**(26-23) = 8 apart, so it rounds to the nearest multiple of 8, i.e. -123456792.
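You can reproduce this rounding without PyTorch by round-tripping the value through an IEEE-754 binary32, e.g. via Python's struct module:

```python
import struct

# Pack the integer as a 32-bit float and unpack it again;
# the round trip applies float32 rounding (spacing of 8 in this range).
rounded = struct.unpack('f', struct.pack('f', float(-123456789)))[0]
print(rounded)  # -123456792.0
```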

Rounding is everywhere, so a general warning wouldn’t be useful. I thus assume you are interested in a warning when an integer value is converted to a floating point one and cannot be represented exactly?
If so, feel free to create a feature request on GitHub so that the code owners can discuss it.
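Until such a feature exists, one way to flag lossy conversions yourself is a round-trip comparison — convert to float32 and back and check which elements changed. A sketch:

```python
import torch

a = torch.tensor([-123456789, 1, 42])  # dtype is torch.int64
f = a.to(torch.float32)
# True where the float32 round trip changed the value,
# i.e. where the integer is not exactly representable in float32
lossy = f.to(torch.int64) != a
print(lossy)  # tensor([ True, False, False])
```

Note this check itself assumes the values fit into float32's range at all; for very large magnitudes the cast back to int64 could overflow.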