Inaccuracy in the PyTorch ConvTranspose2d Calculation

After I run this code:

import torch
import torch.nn as nn

ta = torch.Tensor([[[[2., 4.], [0., 1.]]]])
up = nn.ConvTranspose2d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=0)
with torch.no_grad():
    up.weight.data = torch.Tensor([[[[3., 1.], [1., 5.]]]])
    tb = up(ta)
print(tb)

I get the result:

tensor([[[[ 5.8427, 13.8427,  3.8427],
          [ 1.8427, 16.8427, 20.8427],
          [-0.1573,  0.8427,  4.8427]]]], grad_fn=)

Theoretically, the answer should be:

tensor([[[[ 6., 14.,  4.],
          [ 2., 17., 21.],
          [ 0.,  1.,  5.]]]], grad_fn=)
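
For reference, this is how I computed the expected values by hand: with stride 1 and no padding, each input element scatters a scaled copy of the kernel into the output, and overlapping contributions are summed. A minimal sketch of that arithmetic (weight only, no bias term):

import torch

x = torch.tensor([[2., 4.], [0., 1.]])  # input
k = torch.tensor([[3., 1.], [1., 5.]])  # kernel

# For stride 1 and no padding, the output is (H + K - 1) x (W + K - 1) = 3 x 3.
out = torch.zeros(3, 3)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        # each input element scatters a scaled copy of the kernel
        out[i:i + 2, j:j + 2] += x[i, j] * k

print(out)
# tensor([[ 6., 14.,  4.],
#         [ 2., 17., 21.],
#         [ 0.,  1.,  5.]])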

Why does this inaccuracy happen in the PyTorch calculation?

Hi Ao!

You’re ignoring the bias term in ConvTranspose2d. Try:

up = nn.ConvTranspose2d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=0, bias=False)

You will then obtain your expected result. (Or you could set up.bias to a
known quantity just as you do with up.weight.)
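
A minimal sketch of both options, assuming the same example tensors as in your post (the forward pass runs under no_grad, so no grad_fn is attached):

import torch
import torch.nn as nn

ta = torch.tensor([[[[2., 4.], [0., 1.]]]])
w = torch.tensor([[[[3., 1.], [1., 5.]]]])

# Option 1: create the layer without a bias parameter.
up = nn.ConvTranspose2d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=0, bias=False)
with torch.no_grad():
    up.weight.copy_(w)
    print(up(ta))
# tensor([[[[ 6., 14.,  4.],
#           [ 2., 17., 21.],
#           [ 0.,  1.,  5.]]]])

# Option 2: keep the bias parameter but set it to a known value (here zero).
up2 = nn.ConvTranspose2d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding=0)
with torch.no_grad():
    up2.weight.copy_(w)
    up2.bias.zero_()
    print(up2(ta))  # same result as option 1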

Best.

K. Frank

I didn’t notice the bias of the Conv module before. Thanks a lot!