Question about torchvision.transforms.Resize


With the following input and code, the output looks obviously wrong to me.
Is there anything wrong with my code, i.e. with how I use torchvision.transforms.Resize?

Is there a detailed explanation of how torchvision.transforms.Resize works?

import numpy as np
import torch
from torchvision import transforms

image = []
for i in range(9):
    row = []
    for j in range(9):
        row.append(10 * i + j)  # row i holds 10*i .. 10*i + 8
    image.append(row)
image = np.array(image)
image = torch.unsqueeze(torch.from_numpy(image).type(torch.float), 0)

tempimage = transforms.Resize((4, 4))(image)
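For comparison, the same resize can be reproduced with torch.nn.functional.interpolate in plain (non-antialiased) bilinear mode; this is a sketch under the assumption that Resize uses that mode here (whether antialiasing is applied by default depends on your torchvision version):

```python
import torch
import torch.nn.functional as F

# Same input as above: image[i][j] = 10*i + j
img = torch.tensor([[10.0 * i + j for j in range(9)] for i in range(9)])
img = img.unsqueeze(0).unsqueeze(0)  # interpolate expects (N, C, H, W)

# Plain bilinear resize, no antialiasing (assumption: matches the observed output)
out = F.interpolate(img, size=(4, 4), mode='bilinear', align_corners=False)
print(out[0, 0, 0, 0].item())  # 6.875, the value reported below
```

If your torchvision version enables antialiasing by default in Resize, the two results will differ, since antialiased downsampling averages over a wider window.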

input 1-channel 9x9 image:


output 1-channel 4x4 image:


Could you describe in more detail how it is obviously wrong?
Quite likely, if the discussion in the documentation (Resize — Torchvision main documentation) is not enough, you'd want to look at the source code to see how bilinear interpolation is implemented.
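As a rough sketch of what plain (non-antialiased) bilinear interpolation does for the first output pixel: with the align_corners=False convention, output center k maps to input coordinate (k + 0.5) * scale - 0.5, and the value is a weighted average of the four surrounding input pixels. The helper below is illustrative, not torchvision's actual implementation:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinear interpolation of img at fractional coordinate (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy

# 9x9 input with image[i][j] = 10*i + j, resized to 4x4: scale = 9/4 = 2.25
img = np.array([[10 * i + j for j in range(9)] for i in range(9)], dtype=float)
scale = 9 / 4
y = x = (0 + 0.5) * scale - 0.5  # output pixel (0, 0) maps to input (0.625, 0.625)
print(bilinear_sample(img, y, x))  # 6.875
```

So 6.875 is exactly the bilinear interpolation between the values 0, 1, 10, 11 at input coordinate (0.625, 0.625): it is not an average over the whole 0, 1, 2, 10, 11, 12 corner block.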

Best regards



Let me describe the details:

For the top-left corner of the input image, the elements are 0, 1, 2, 10, 11, 12, but the corresponding element in the output image is 6.875.
In my example, this is a down-sampling case, so the resized values should lie in between the input values.

Could you please share the source code link so that I could have a look?

Do you have any more comments about my questions?