I can't figure out how to correct a size mismatch


I can’t figure out how to correct a size mismatch. I’m trying to concatenate numerical and categorical embeddings with the 1000-feature output of a VGG-19. Can anyone point out the cause of the size mismatch error?

Error below:

Traceback (most recent call last):
  File "inspection_model.py", line 463, in <module>
    train_roof_df, test_roof_df,train_losses, test_losses = roof_run()
  File "inspection_model.py", line 417, in roof_run
    outputs = roof_model(image, numerical_data, categorical_data)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "inspection_model.py", line 377, in forward
    x = F.relu(self.fc2(x))
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
    input = module(input)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\functional.py", line 1370, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: size mismatch, m1: [1 x 1049], m2: [1000 x 1049] at C:\w\1\s\tmp_conda_3.7_100118\conda\conda-bld\pytorch_1579082551706\work\aten\src\TH/generic/THTensorMath.cpp:136
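
For context, here is a minimal snippet (with placeholder sizes matching the error) that reproduces the same kind of mismatch:

```python
import torch
import torch.nn as nn

# a Linear layer declared with 1000 input features, like fc2 in my model
layer = nn.Linear(1000, 1049)

# but the tensor passed in has 1049 features after concatenation
x = torch.randn(1, 1049)

try:
    layer(x)
except RuntimeError as e:
    print(e)  # size mismatch: input has 1049 features, layer expects 1000
```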

Here is the model:

class Image_Model(nn.Module):
    def __init__(self, embedding_size):
        super().__init__()
        self.all_embeddings = nn.ModuleList([nn.Embedding(ni, nf) for ni, nf in embedding_size])
        self.embedding_dropout = nn.Dropout(p = 0.04)
        self.cnn = models.vgg19(pretrained=True)
        for param in self.cnn.parameters():
            param.requires_grad = False
        n_features = self.cnn.classifier[6].out_features

        self.fc2 = nn.Sequential(nn.Linear(n_features, 1049))
        self.fc3 = nn.Sequential(nn.Linear(1049, 256))
        self.fc5 = nn.Dropout(p = 0.04)
        self.fc9 = nn.Sequential(nn.Linear(256, 2))
    def forward(self, image, numerical_columns, cat_columns):
        embeddings = []
        for i, e in enumerate(self.all_embeddings):
            embeddings.append(e(cat_columns[:, i]))
        cat_embedd = torch.cat(embeddings, 1)
        x = self.cnn(image)

        x = torch.cat((x, numerical_columns), dim = 1)
        x = torch.cat((x, cat_embedd), dim = 1)
        x = F.relu(self.fc2(x))
        x = self.fc3(x)

        x = self.fc5(x)

        x = F.relu(self.fc9(x))
        x = F.log_softmax(x, dim = 1)
        return x

I tried a layer of (1, 1049) and that didn't work either.

Thanks for any help.

Hey Jordan, your fc2 requires an input of size 1000, but in your forward the x you pass it is larger — 1049, per the error, since you concatenate the numerical columns and categorical embeddings onto the VGG output first. Print the size of x (x.size()) before you pass it to fc2, then adjust fc2's input dimension to match (it's currently set to n_features, which is 1000 for VGG-19). Hope that helps!
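
For example, if your numerical columns and embedding outputs add up to 49 extra features (the column counts below are guesses — swap in your real ones), the fix would look something like:

```python
import torch
import torch.nn as nn

n_features = 1000            # VGG-19 classifier output size
n_numerical = 10             # guess: number of numerical columns
embedding_dims = [5, 34]     # guess: nf values from your embedding_size list

# total width of x after the two torch.cat calls in forward
concat_size = n_features + n_numerical + sum(embedding_dims)  # 1049 here

# fc2's in_features must match concat_size, not n_features
fc2 = nn.Sequential(nn.Linear(concat_size, 1049))

x = torch.randn(1, concat_size)   # stand-in for the concatenated tensor
print(fc2(x).size())              # torch.Size([1, 1049])
```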