Hi,
I am trying to understand ConvTranspose2d outside of an nn.Module, i.e. reconstructing an input by applying conv2d and ConvTranspose2d consecutively. I wrote the code below (admittedly pretty naive), but I expected it to work. Well, it does not. Can you tell me where I went wrong?
Thanks,
import torch.nn as nn
import torch.autograd as autograd
import torch
inp = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
w = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
mconv = nn.Conv2d(1, 1, kernel_size=3, stride=1, bias=False)
dum = torch.FloatTensor(inp)
dum = dum.unsqueeze(0)
dum = dum.unsqueeze(0)
input = autograd.Variable(dum)
dum = torch.FloatTensor(w)
dum = dum.unsqueeze(0)
dum = dum.unsqueeze(0)
mconv.weight.data = dum
output = mconv(input)
mdconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=1, bias=False)
mdconv.weight.data = torch.transpose(mconv.state_dict()['weight'], 2, 3)
input_rec = mdconv(output)
print(input, input_rec)
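For reference, here is a minimal sketch of the same pipeline on modern PyTorch (1.0+), where plain tensors replace autograd.Variable. One thing worth noting: ConvTranspose2d already applies the kernel in "transposed" fashion, so no manual torch.transpose of the weight is needed, and the result has the same shape as the input but not the same values, because the transposed convolution is the adjoint of the convolution, not its inverse.

```python
import torch
import torch.nn as nn

# Same 4x4 input and 3x3 kernel as above, built with arange for brevity
inp = torch.arange(1.0, 17.0).reshape(1, 1, 4, 4)
w = torch.arange(1.0, 10.0).reshape(1, 1, 3, 3)

conv = nn.Conv2d(1, 1, kernel_size=3, stride=1, bias=False)
deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=1, bias=False)
with torch.no_grad():
    conv.weight.copy_(w)
    deconv.weight.copy_(w)  # same weight; ConvTranspose2d handles the "transpose"

out = conv(inp)    # shape (1, 1, 2, 2): valid convolution shrinks 4x4 to 2x2
rec = deconv(out)  # shape (1, 1, 4, 4): same shape as inp, different values

print(inp)
print(rec)
```

The shapes round-trip (4x4 in, 4x4 out), but the values do not, which is the behavior the question observes.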