Tensor size mismatch autoencoder pytorch

I’m using a stacked autoencoder built from Conv layers, but I’m getting a tensor mismatch error and I’m not sure why. Everything done in the encoder is reversed in the decoder!

This is for time-series data. The input shape is (batch_size, 1, 3000).
Here’s the code:

class CDAutoEncoder(nn.Module):
    def __init__(self, input_size, output_size, kernel, stride):
        super(CDAutoEncoder, self).__init__()

        self.forward_pass = nn.Sequential(
            nn.Conv1d(input_size, output_size, kernel_size=kernel, stride=stride, padding=0),
        )
        self.backward_pass = nn.Sequential(
            nn.ConvTranspose1d(output_size, input_size, kernel_size=kernel, stride=stride, padding=0),
        )

    def forward(self, x):
        y = self.forward_pass(x)
        return y

    def reconstruct(self, x):
        return self.backward_pass(x)

class StackedAutoEncoder(nn.Module):
    def __init__(self):
        super(StackedAutoEncoder, self).__init__()

        self.ae1 = CDAutoEncoder(1, 32, 50, 10)
        self.ae2 = CDAutoEncoder(32, 64, 10, 3)
        self.ae3 = CDAutoEncoder(64, 64, 5, 1)

    def forward(self, x):
        a1 = self.ae1(x)
        a2 = self.ae2(a1)
        a3 = self.ae3(a2)

        return self.reconstruct(a3)

    def reconstruct(self, x):
        a2_reconstruct = self.ae3.reconstruct(x)
        a1_reconstruct = self.ae2.reconstruct(a2_reconstruct)
        x_reconstruct = self.ae1.reconstruct(a1_reconstruct)
        return x_reconstruct

The error:

RuntimeError: The size of tensor a (2990) must match the size of tensor b (3000) at non-singleton dimension 2
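Tracing the shapes with the standard Conv1d/ConvTranspose1d length formulas (for padding=0) shows where the 2990 comes from — ae2's floor division drops timesteps that the transposed convolutions can't recover:

```python
# Conv1d output length:          floor((L_in - kernel) / stride) + 1   (padding=0)
# ConvTranspose1d output length: (L_in - 1) * stride + kernel          (padding=0)

def conv_len(l, kernel, stride):
    return (l - kernel) // stride + 1

def deconv_len(l, kernel, stride):
    return (l - 1) * stride + kernel

layers = [(50, 10), (10, 3), (5, 1)]  # (kernel, stride) for ae1..ae3

l = 3000
encoded = []
for k, s in layers:
    l = conv_len(l, k, s)
    encoded.append(l)
print(encoded)  # [296, 96, 92]

for k, s in reversed(layers):
    l = deconv_len(l, k, s)
print(l)  # 2990 -- ae2 maps 296 -> 96, but its transpose maps 96 -> 295,
          # so ae1's transpose can only rebuild 2990 of the original 3000
```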

I’ve tried adding padding and it worked, but when I changed the kernel size I got a different tensor-size-mismatch error. The same error also appears if I try to add an AvgPool1d(2) layer, even though I use nn.Upsample(scale_factor=2) in the decoder.

Apparently there’s nothing like ‘same’ padding here, so is there an automated solution for this?

You probably want padding in your convolutions to not lose timesteps.
The typical thing is to have odd kernel sizes and a padding of kernel_size // 2.
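A minimal sketch of that suggestion (the shapes here are just example values):

```python
import torch
import torch.nn as nn

# With stride 1, padding = kernel_size // 2 keeps the length unchanged.
x = torch.randn(8, 32, 96)  # (batch, channels, timesteps) -- example sizes
conv = nn.Conv1d(32, 64, kernel_size=5, stride=1, padding=5 // 2)
print(conv(x).shape)  # torch.Size([8, 64, 96]) -- no timesteps lost
```

Note that with stride > 1 the convolution still floors away any remainder, so the matching ConvTranspose1d may additionally need its output_padding argument to recover the exact input length.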

Best regards


Thank you for your reply, but this doesn’t help with my problem. I have already tried it, but for some reason I get the same error.