Unexpected behavior when turning a float tensor to int or long

Hi, I used .long() or torch.floor to convert a float tensor to int, but got unexpected behavior:

import torch

origin = torch.tensor([2.84, 2.84, 2.84], device='cuda')
v_coords = torch.stack(torch.meshgrid([torch.arange(10, device='cuda')]*3)).reshape(3, -1).T.float()
voxel_size = 0.04
v_coords = v_coords[:5]
print(v_coords, v_coords.dtype)
w_coords = v_coords * voxel_size + origin
v_coords = (w_coords - origin) / voxel_size
print(v_coords, v_coords.dtype)

test_1 = torch.floor(v_coords)
print(test_1, test_1.dtype)
test_1 = test_1.long()
print(test_1, test_1.dtype)
test_2 = v_coords.long()
print(test_2, test_2.dtype)

test_3 = torch.floor_divide(w_coords - origin, voxel_size)
print(test_3, test_3.dtype)

Here’s the result:

tensor([[0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 3.],
        [0., 0., 4.]], device='cuda:0') torch.float32
tensor([[0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 1.0000],
        [0.0000, 0.0000, 2.0000],
        [0.0000, 0.0000, 3.0000],
        [0.0000, 0.0000, 4.0000]], device='cuda:0') torch.float32
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 4.]], device='cuda:0') torch.float32
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 1],
        [0, 0, 2],
        [0, 0, 4]], device='cuda:0') torch.int64
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 1],
        [0, 0, 2],
        [0, 0, 4]], device='cuda:0') torch.int64
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 4.]], device='cuda:0') torch.float32

If I set origin to float64 instead:

tensor([[0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 3.],
        [0., 0., 4.]], device='cuda:0') torch.float32
tensor([[0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 1.0000],
        [0.0000, 0.0000, 2.0000],
        [0.0000, 0.0000, 3.0000],
        [0.0000, 0.0000, 4.0000]], device='cuda:0', dtype=torch.float64) torch.float64
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 3.]], device='cuda:0', dtype=torch.float64) torch.float64
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 1],
        [0, 0, 2],
        [0, 0, 3]], device='cuda:0') torch.int64
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 1],
        [0, 0, 2],
        [0, 0, 3]], device='cuda:0') torch.int64
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 3.]], device='cuda:0', dtype=torch.float64) torch.float64

Could you explain what's unexpected here? I'm getting the same results on the CPU and the GPU:

tensor([[0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 3.],
        [0., 0., 4.]], device='cuda:0') torch.float32
tensor([[0.0000000000, 0.0000000000, 0.0000000000],
        [0.0000000000, 0.0000000000, 0.9999990463],
        [0.0000000000, 0.0000000000, 1.9999980927],
        [0.0000000000, 0.0000000000, 2.9999971390],
        [0.0000000000, 0.0000000000, 4.0000019073]], device='cuda:0') torch.float32
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 4.]], device='cuda:0') torch.float32
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 1],
        [0, 0, 2],
        [0, 0, 4]], device='cuda:0') torch.int64
tensor([[0, 0, 0],
        [0, 0, 0],
        [0, 0, 1],
        [0, 0, 2],
        [0, 0, 4]], device='cuda:0') torch.int64
tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 1.],
        [0., 0., 2.],
        [0., 0., 4.]], device='cuda:0') torch.float32

Hi, thanks for your reply.
I mean I expected the original v_coords and test_1, test_2, and test_3 to all match. When I use NumPy with float64 they are all the same, but with float32 the results above occur.
So the "unexpected behavior" probably has to do with floating-point precision, not floor. Apologies.
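For anyone landing here with the same problem: when the recovered coordinates are supposed to be whole numbers, one common workaround is to round before casting instead of flooring, so the tiny float32 error is absorbed rather than truncated. A CPU sketch of the same setup (with indexing='ij' passed explicitly to silence the meshgrid warning):

```python
import torch

origin = torch.tensor([2.84, 2.84, 2.84])
voxel_size = 0.04
v_coords = torch.stack(
    torch.meshgrid([torch.arange(10)] * 3, indexing='ij')
).reshape(3, -1).T.float()[:5]

w_coords = v_coords * voxel_size + origin
recovered = (w_coords - origin) / voxel_size

# floor() amplifies the tiny float32 error (2.9999971 -> 2),
# while round() absorbs it when values should be whole numbers.
idx = torch.round(recovered).long()
print(idx[:, 2])  # tensor([0, 1, 2, 3, 4])
```

This only works when the values are known to be near-integers; for genuine binning of arbitrary coordinates, floor is the right operation and the precision of the inputs is what has to change.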