ReLU throwing a size mismatch error

Hello,

I added a ReLU activation in my forward pass and now the model is throwing a size mismatch error. I'm not sure how it's messing up the sizes.

Here is my model class:


    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models


    class Image_Embedd(nn.Module):

        def __init__(self, embedding_sizes, p=0.4):
            '''
            Args
            ---------------------------
            embedding_sizes: list of (num_categories, embedding_dim) tuples,
                one entry per categorical column
            p: dropout probability, default 0.4
            '''
            super(Image_Embedd, self).__init__()

            self.all_embeddings = nn.ModuleList(
                [nn.Embedding(ni, nf) for ni, nf in embedding_sizes])
            self.embedding_dropout = nn.Dropout(p)

            self.cnn = models.resnet50(pretrained=False).cuda()
            self.cnn.fc = nn.Linear(self.cnn.fc.in_features, 1000)

            self.fc1 = nn.Linear(1000, 1077)
            self.fc2 = nn.Linear(1077, 128)
            self.fc3 = nn.Linear(128, 2)

        # define the forward method
        def forward(self, image, x_numerical, x_categorical):
            embeddings = []
            for i, e in enumerate(self.all_embeddings):
                embeddings.append(e(x_categorical[:, i]))

            x = torch.cat(embeddings, 1)
            x = self.embedding_dropout(x)
            x1 = self.cnn(image)
            x2 = x_numerical

            x3 = torch.cat((x1, x2), dim=1)
            x4 = torch.cat((x, x3), dim=1)
            x4 = self.fc2(x4)
            x4 = F.relu(self.fc2(x4))
            x4 = self.fc3(x4)
            x4 = F.log_softmax(x4)
            return x4

And below is the error.


    RuntimeError                              Traceback (most recent call last)
    <ipython-input-...> in <module>
         19             break
         20
    ---> 21     y_pred = combined_model(image, numerical_data, categorical_data)
         22     single_loss = criterion(y_pred, label)
         23

    C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
        539             result = self._slow_forward(*input, **kwargs)
        540         else:
    --> 541             result = self.forward(*input, **kwargs)
        542         for hook in self._forward_hooks.values():
        543             hook_result = hook(self, input, result)

    <ipython-input-...> in forward(self, image, x_numerical, x_categorical)
         41         x4 = torch.cat((x, x3), dim = 1)
         42         x4 = self.fc2(x4)
    ---> 43         x4 = F.relu(self.fc2(x4))
         44         x4 = self.fc3(x4)
         45         x4 = F.log_softmax(x4)

    C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
        539             result = self._slow_forward(*input, **kwargs)
        540         else:
    --> 541             result = self.forward(*input, **kwargs)
        542         for hook in self._forward_hooks.values():
        543             hook_result = hook(self, input, result)

    C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
         85
         86     def forward(self, input):
    ---> 87         return F.linear(input, self.weight, self.bias)
         88
         89     def extra_repr(self):

    C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
       1368     if input.dim() == 2 and bias is not None:
       1369         # fused op is marginally faster
    -> 1370         ret = torch.addmm(bias, input, weight.t())
       1371     else:
       1372         output = input.matmul(weight.t())

    RuntimeError: size mismatch, m1: [10 x 128], m2: [1077 x 128] at C:/w/1/s/windows/pytorch/aten/src\THC/generic/THCTensorMathBlas.cu:290


When I comment the ReLU out, it runs fine. How should I properly add the ReLU to the forward pass?

It seems you are applying self.fc2 twice to x4. Could this be the issue? Does your code run if you comment out line 42 (x4 = self.fc2(x4))?
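
In other words, the first self.fc2 call already maps the 1077 concatenated features down to 128, so the second call receives a [10 x 128] input against fc2's 1077 x 128 weight, which is exactly the m1/m2 mismatch in your traceback. As a minimal sketch (assuming the first fc2 call is the unintended one), the tail of forward would become:

    x4 = torch.cat((x, x3), dim=1)   # [batch, 1077] combined features
    x4 = F.relu(self.fc2(x4))        # 1077 -> 128, fc2 applied exactly once
    x4 = self.fc3(x4)                # 128 -> 2
    x4 = F.log_softmax(x4, dim=1)    # explicit dim avoids the implicit-dim warning
    return x4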

Yep! Thank you for the catch.