grid_sample() and affine_grid() float imprecision

The torch.nn.functional.grid_sample() function requires its second argument, grid, to be normalized between -1 and 1. These fractional coordinates are generally not exactly representable in floating point, which introduces imprecision in the sampled output.
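A minimal sketch of where the imprecision enters (my own illustration of the coordinate convention, not grid_sample internals): with align_corners=True, pixel index i along a dimension of size W maps to the normalized coordinate 2*i/(W-1) - 1. When W-1 is not a power of two, that coordinate is not exactly representable in float32, so unnormalizing it does not land back on an integer index:

```python
import torch

W = 7  # same width as the example below
idx = torch.arange(W, dtype=torch.float32)

# align_corners=True convention: index -> normalized coordinate in [-1, 1]
norm = 2.0 * idx / (W - 1) - 1.0

# Undo the normalization; ideally this would recover 0, 1, ..., W-1 exactly
back = (norm + 1.0) * (W - 1) / 2.0

print(back)                    # some entries come back as e.g. 1.0000001
print(torch.equal(back, idx))  # False
```

Because the recovered index sits slightly off the integer, the sampler interpolates between neighboring voxels instead of hitting a single voxel exactly.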

Take the code below as an example: I simulate a small 3D image, apply an identity transform to it with grid_sample(), and show that the output is not equal to the input:

import torch

device = "cpu"

D, H, W = (7, 7, 7)

# Boolean mask of an ellipsoid with center c and radii r, evaluated on grid g
shape = lambda g, c, r: ((g[0, :, :, :, 0] - c[0]) / r[0])**2 + ((g[0, :, :, :, 1] - c[1]) / r[1])**2 + ((g[0, :, :, :, 2] - c[2]) / r[2])**2 <= 1

grid = torch.nn.functional.affine_grid(torch.eye(3, 4, device=device).unsqueeze(0), (1, 1, D, H, W), align_corners=True)

oval = shape(grid, [0.0, 0.0, 0.0], [0.4, 0.4, 0.6])
sphere = shape(grid, [0.0, 0.0, 0.0], [0.8, 0.8, 0.8])

intensity1 = -700.0
intensity2 = 150.0
background = -1024.0

image = torch.where(oval, intensity1, torch.where(sphere, intensity2, background)).unsqueeze(0).unsqueeze(0)

image_id = torch.nn.functional.grid_sample(image, grid, align_corners=True)

print((image_id == image).all())  # tensor(False): the identity resample is not bit-exact
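To quantify the discrepancy rather than only test for exact equality, here is a self-contained sketch (a stripped-down version of the setup above, using random data instead of the ellipsoid image) that reports the largest absolute deviation of an identity resample:

```python
import torch

# Random 3D volume in the same N, C, D, H, W layout as above
a = torch.randn(1, 1, 7, 7, 7)

# Identity affine transform and its sampling grid
theta = torch.eye(3, 4).unsqueeze(0)
grid = torch.nn.functional.affine_grid(theta, (1, 1, 7, 7, 7), align_corners=True)

b = torch.nn.functional.grid_sample(a, grid, align_corners=True)

# Largest absolute deviation from the input; tiny, but typically nonzero
print((a - b).abs().max())
```

In my experience the deviation is on the order of the float32 epsilon times the local intensity range, which is why exact equality checks fail.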

Converting the tensors to float64 does not change the outcome, and neither does switching align_corners between True and False.
This becomes a serious problem when such small errors accumulate over larger image sizes. Do you have any solutions or advice?