ConvTranspose2d with output_padding produces the same result across different batch elements

import torch
from torch import autograd, nn

input = autograd.Variable(torch.randn(2, 8, 4, 1))
downsample = nn.Conv2d(8, 16, (3, 1), stride=(2, 1), padding=0)
upsample = nn.ConvTranspose2d(16, 8, (3, 1), stride=(2, 1), output_padding=(1, 0))
h = downsample(input)
print(input.size())   # (2, 8, 4, 1)
print(h.size())       # (2, 16, 1, 1)
output = upsample(h)
print(output.size())  # (2, 8, 4, 1)
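
For reference, the sizes printed above follow the standard formulas from the PyTorch docs; this is just the arithmetic for the code above, nothing new:

# Conv2d:          H_out = floor((H_in + 2*pad - kernel) / stride) + 1
# ConvTranspose2d: H_out = (H_in - 1)*stride - 2*pad + kernel + output_padding
h_down = (4 + 2*0 - 3) // 2 + 1   # = 1, matches h.size()
h_up = (1 - 1)*2 - 2*0 + 3 + 1    # = 4, matches output.size()

Note that without output_padding the upsampled height would be 3, which is why output_padding=(1, 0) is needed here to get back to 4.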

I am trying to use ConvTranspose2d to convert a (16, 1, 1) tensor into (8, 4, 1).
However, across different batch elements, the last value is the same. Is this normal?

output[0,0,:,:]
Variable containing:
-0.1138
0.2510
0.2639
0.0946
[torch.FloatTensor of size 4x1]
output[1,0,:,:]
Variable containing:
-0.0865
0.2529
-0.0836
0.0946
[torch.FloatTensor of size 4x1]
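
To confirm that only the last row repeats, the two batch elements can be compared directly (same variables as the snippet above):

# Absolute difference between the two batch elements.
diff = (output[0] - output[1]).abs()
print(diff[:, 3, :].max())   # last row: difference is exactly zero
print(diff[:, :3, :].max())  # other rows: clearly nonzero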

Can anyone help? Thanks.

I think the problem comes from output_padding: if there is no output_padding, the outputs differ across the batch, but I need it to reconstruct the original size. What is output_padding really doing, and why does it produce the same output across different batch elements?
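
My current guess, still to be confirmed: with stride 2 and kernel height 3, an input of height 1 only reaches output rows 0 through 2, so the extra row added by output_padding receives no contribution from the input at all and ends up holding just the bias, which is shared across the whole batch. A minimal sketch to test this guess:

import torch
from torch import autograd, nn

# Isolate the upsampling layer; same shapes as above.
upsample = nn.ConvTranspose2d(16, 8, (3, 1), stride=(2, 1), output_padding=(1, 0))
h = autograd.Variable(torch.randn(2, 16, 1, 1))
out = upsample(h)

# If the guess holds, the last row is identical across batch elements
# and equals the layer's bias for each output channel.
print(out[0, 0, 3, 0], out[1, 0, 3, 0])  # same value in both batch elements?
print(upsample.bias[0])                  # does it match the repeated value?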