How to fix RuntimeError: size mismatch?

I receive an error:

RuntimeError: size mismatch, m1: [3072 x 32], m2: [3072 x 32] at /opt/conda/conda-bld/pytorch_1587428091666/work/aten/src/THC/generic/THCTensorMathBlas.cu:283

when I’m trying to add a classifier to an autoencoder:

import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvAutoencoder(nn.Module):
    def __init__(self):
        super(ConvAutoencoder, self).__init__()

        # Encoder
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)

        # Decoder
        self.t_conv1 = nn.ConvTranspose2d(4, 16, 2, stride=2)
        self.t_conv2 = nn.ConvTranspose2d(16, 3, 2, stride=2)

        # Classifier
        self.classifier = nn.Sequential(nn.Linear(32 * 32 * 3, 32))


    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.pool(x)
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        x = F.relu(self.t_conv1(x))
        x = F.sigmoid(self.t_conv2(x))
        print(x.shape)
        out = self.classifier(x)

        return x, F.log_softmax(out)

I changed the shapes in nn.Sequential(nn.Linear(32 * 32 * 3, 32)) so the sizes would match, but I'm still getting the mismatch error. Why?

Based on the error message, the activation input has 32 features (the decoder output is most likely [batch_size=32, 3, 32, 32], and nn.Linear is applied to its last dimension), so you would need to use in_features=32 in the linear layer:

# fails: in_features=3072 does not match the last dimension of x, which is 32
lin = nn.Linear(32 * 32 * 3, 32)
x = torch.randn(32 * 32 * 3, 32)
out = lin(x)
> RuntimeError: mat1 and mat2 shapes cannot be multiplied (3072x32 and 3072x32)


# works: in_features=32 matches the last dimension of x
lin = nn.Linear(32, 32)
out = lin(x)  # applied to the last dim -> output shape [3072, 32]

However, it also seems you are not flattening the activation before feeding it to the linear layer, so make sure this is indeed the desired behavior; if you want the classifier to see the whole reconstructed image instead, flatten first as in the sketch below.
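A minimal sketch of the flattening approach, assuming the decoder output has shape [32, 3, 32, 32] (which matches the 3072 in your error message) and keeping the 32 output features from your original classifier:

import torch
import torch.nn as nn
import torch.nn.functional as F

# same layer sizes as in the question; 3 * 32 * 32 = 3072 input features
classifier = nn.Sequential(nn.Linear(32 * 32 * 3, 32))

x = torch.rand(32, 3, 32, 32)            # stand-in for the decoder output
flat = x.view(x.size(0), -1)             # flatten all but the batch dim -> [32, 3072]
out = classifier(flat)                   # -> [32, 32]
print(F.log_softmax(out, dim=1).shape)   # torch.Size([32, 32])

With this, in_features of the linear layer stays at 32 * 32 * 3, because each sample is now a single 3072-dimensional vector rather than a [3, 32, 32] tensor.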