Here is a snippet of my test code:
import torch
import torch.nn as nn
import torch.nn.functional as F

data = torch.tensor([[[1.], [2.], [3.], [4.]]]) # shape (1, 4, 1): N=1, L=4, C_in=1
data = data.permute(0, 2, 1) # shape (1, 1, 4): (N, C_in, L), as Conv1d expects
conv = nn.Conv1d(1, 1, 2, bias=False) # bias=False (not bias=None) for nn.Conv1d
print(conv.weight)
output = conv(data)
print("output : {}".format(output))
dweight = conv.weight.transpose(0, 1).flip(2) # swap in/out channel dims; I also tried without .flip(2)
print("dweight : {}".format(dweight))
dconv = F.conv_transpose1d(input=output, weight=dweight, bias=None)
print(dconv) # != data
The result I expected: input -> conv1d(input) -> conv_transpose1d(conv1d(input)) should give back the original input.
But the outputs are not equal, whether I flip the temporal axis or not. Shouldn't they be the same?
I'm really confused and frustrated here. Could anyone figure out what is going on?
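For reference, here is the smallest self-contained version of what I'm seeing (the names `x`, `x_rec`, `w_t` and the inner-product check at the end are mine, added as a sanity test; assuming a recent PyTorch):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

x = torch.tensor([[[1., 2., 3., 4.]]])   # shape (1, 1, 4): (N, C_in, L)
conv = nn.Conv1d(1, 1, 2, bias=False)
y = conv(x)                              # shape (1, 1, 3)

# Same weight with in/out channel dims swapped, no temporal flip.
w_t = conv.weight.transpose(0, 1)
x_rec = F.conv_transpose1d(y, w_t)       # shape (1, 1, 4)

# The round trip does not recover the input:
print(torch.allclose(x_rec, x))          # False

# But the identity <conv1d(x), z> == <x, conv_transpose1d(z)> does hold,
# i.e. conv_transpose1d acts as the transpose of conv1d, not its inverse:
z = torch.randn_like(y)
lhs = (y * z).sum()
rhs = (x * F.conv_transpose1d(z, w_t)).sum()
print(torch.allclose(lhs, rhs))          # True
```

So the transposed op seems to undo the shape change (length 3 back to length 4) but not the values themselves.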