Problem in reconstructing an input with ConvTranspose2d

Hi,

I am trying to understand ConvTranspose2d outside of an nn module, i.e. reconstructing an input by applying conv2d and ConvTranspose2d consecutively. I wrote the code below, admittedly pretty naive, but I expected it to work. Well, it does not. Can you tell me where I got it wrong?

Thanks,

import torch.nn as nn
import torch.autograd as autograd
import torch

inp = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]

w = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
mconv = nn.Conv2d(1, 1, kernel_size=3, stride=1, bias=False)
dum = torch.FloatTensor(inp)
dum = dum.unsqueeze(0)
dum = dum.unsqueeze(0)
input = autograd.Variable(dum)

dum = torch.FloatTensor(w)
dum = dum.unsqueeze(0)
dum = dum.unsqueeze(0)
mconv.weight.data = dum

output = mconv(input)

mdconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=1, bias=False)
mdconv.weight.data = torch.transpose(mconv.state_dict()['weight'], 2, 3)

input_rec = mdconv(output)

print(input, input_rec)

I ran your code (after changing the curly quotes in ‘weight’ to plain 'weight') and got:

Variable containing:
(0 ,0 ,.,.) = 
   1   2   3   4
   5   6   7   8
   9  10  11  12
  13  14  15  16
[torch.FloatTensor of size 1x1x4x4]
 Variable containing:
(0 ,0 ,.,.) = 
    348   1785   4008   2751
   1224   5211  10737   7155
   2100   7053  12579   8121
   1584   4887   8190   5157
[torch.FloatTensor of size 1x1x4x4]

Maybe your question is really about what the "deconv" operation does: it is not the case that if b = Conv(a), then a = Deconv(b). You can refer to Deconv for details.
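To make this concrete, here is a small sketch (using the modern functional API rather than the Variable-style code above, so the exact calls differ from your snippet): ConvTranspose2d with the same weight is the *adjoint* (the gradient of conv2d with respect to its input), not the inverse, so it preserves inner products across the two operations but does not undo the convolution.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 4, 4)   # an input like yours
w = torch.randn(1, 1, 3, 3)   # a 3x3 kernel, stride 1, no padding
y = torch.randn(1, 1, 2, 2)   # same shape as conv2d's output

# Adjoint relation: <Conv(x), y> == <x, ConvT(y)> with the SAME weight.
lhs = (F.conv2d(x, w) * y).sum()
rhs = (x * F.conv_transpose2d(y, w)).sum()
print(torch.allclose(lhs, rhs, atol=1e-5))   # True

# But ConvT(Conv(x)) is not x: the transpose is not an inverse.
x_rec = F.conv_transpose2d(F.conv2d(x, w), w)
print(torch.allclose(x_rec, x))              # False
```

So the large numbers you see in input_rec are expected; each output pixel is a weighted sum of overlapping contributions, not a reconstruction.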


Thank you for your response. I did not expect to reconstruct the exact input, but I did expect to obtain something similar. You are right, though: it is a conceptual question that is not confined to PyTorch per se.