Unexpected behavior of grid_sample

As described in the docs, grid_sample samples values using normalized coordinates in the closed range [-1, 1], i.e., (x, y) = (-1, -1) corresponds to the top-left pixel of the input, and (x, y) = (1, 1) to the bottom-right pixel. But when I apply grid_sample to a volumetric (5D) input, the result is completely unexpected. The code snippet is as follows:

import torch
import torch.nn.functional as F

data_dict = torch.load('data_dict.pth')

prob = data_dict['prob'].unsqueeze(0).unsqueeze(0)
sm = data_dict['sm'].unsqueeze(0).unsqueeze(0)

ret = F.grid_sample(prob, sm, mode='nearest')

print('--------result of zero padding----------')
print(ret[0, 0, 0, 0, 0])   # sampling result of (0, 0, 0)
print(sm[0, 0, 0, 0, :])    # sampling coord of (0, 0, 0)
print(prob[0, 0, -1, 0, 0]) # sampling target of (0, 0, 0)

ret = F.grid_sample(prob, sm, mode='nearest', padding_mode='border')

print('--------result of border padding----------')
print(ret[0, 0, 0, 0, 0])   # sampling result of (0, 0, 0)
print(sm[0, 0, 0, 0, :])    # sampling coord of (0, 0, 0)
print(prob[0, 0, -1, 0, 0]) # sampling target of (0, 0, 0)

The output is as follows:

--------result of zero padding----------
tensor(0.)
tensor([-1., -1.,  1.])
tensor(0.9999)
--------result of border padding----------
tensor(0.9999)
tensor([-1., -1.,  1.])
tensor(0.9999)
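For context, the convention quoted from the docs corresponds to align_corners=True, where a normalized coordinate maps to an index via idx = (coord + 1) / 2 * (size - 1). The helper below is a sketch of that mapping (the name `unnormalize` and the depth value are my own, not from the snippet); it shows why a coordinate of (-1, -1, 1) should pick the last depth slice, i.e. prob[0, 0, -1, 0, 0]:

```python
# Sketch of grid_sample's coordinate unnormalization with align_corners=True:
# -1 maps to index 0, and 1 maps to index size - 1.
def unnormalize(coord, size):
    return (coord + 1) / 2 * (size - 1)

D = 25  # hypothetical depth; the real value comes from data_dict.pth
print(unnormalize(-1.0, D))  # 0.0  -> first slice along that axis
print(unnormalize(1.0, D))   # 24.0 -> last slice (index D - 1)
```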

ret[0, 0, 0, 0, 0] should be the value of prob sampled at (-1, -1, 1), which equals 0.9999, but the sampled result is 0. I guessed this might be caused by some kind of out-of-bounds behavior, so I set the padding mode to border, and then the result makes sense.

What confuses me is where the out-of-bounds behavior comes from. Does it come from floating-point accuracy loss? If so, how can we ensure that the sampling results are correct at the boundaries?
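For reference, the difference between the two padding modes can be shown without the saved tensors. Below is a minimal synthetic sketch (the prob/sm values are made up, not the ones from data_dict.pth): a sampling coordinate that falls outside [-1, 1] resolves to an out-of-range index, which yields 0 under zeros padding but the clamped border value under border padding.

```python
import torch
import torch.nn.functional as F

# Hypothetical 5D input filled with a constant, so any in-bounds sample returns 5.
prob = torch.full((1, 1, 2, 2, 2), 5.0)

# One sampling location deliberately outside [-1, 1] on every axis.
# The grid for a 5D input has shape (N, D_out, H_out, W_out, 3).
sm = torch.full((1, 1, 1, 1, 3), 3.0)

zeros_ret = F.grid_sample(prob, sm, mode='nearest',
                          padding_mode='zeros', align_corners=True)
border_ret = F.grid_sample(prob, sm, mode='nearest',
                           padding_mode='border', align_corners=True)

print(zeros_ret.item())   # 0.0 -- the out-of-range index is treated as zero
print(border_ret.item())  # 5.0 -- the coordinate is clamped to the border first
```

This matches the behavior in the output above: if a coordinate that is nominally -1 or 1 lands fractionally outside the valid range, zeros padding returns 0 while border padding clamps it back inside.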

Thanks.

Could you post example tensors for prob and sm which would reproduce the unexpected behavior?