Why am I getting this error when both the labels and the output of the model have the same dimensions?


import torch
import torch.nn as nn

class MyNet1(nn.Module):

    def __init__(self):
        super(MyNet1, self).__init__()

        self.common = nn.Sequential(
            nn.Linear(64*64*3, 32*32*3),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(32*32*3, 16*16*3),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(16*16*3, 320),
            nn.Dropout(0.2)
        )

        self.branch1 = nn.Sequential(
            nn.Linear(320, 320),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(320, 160),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(160, 10)
        )

        self.branch2 = nn.Sequential(
            nn.Linear(320, 320),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(320, 160),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(160, 10)
        )

    def forward(self, X):
        X = X.view(X.size(0), -1)
        X = self.common(X)
        op1 = self.branch1(X)
        op2 = self.branch2(X)
        return torch.cat([op1, op2], 1)

It expects the target to have dimensions (60,10), but you provide them as (60,2,10).

I think you have to give your targets as a 2D map where each pixel has the value of the appropriate class (or just a vector with the correct class indices, depending on what you’re doing).

But the outputs and labels have the same dimensions; I have printed both. Please have a look.

Yes and that is the problem :slight_smile:

They shouldn’t have the same dimensions for that loss function. The error tells you that the targets must be of size (60,10) and not (60,2,10).

Also look at: https://pytorch.org/docs/stable/nn.html#nllloss :slight_smile:
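To see the shape convention concretely, here is a small sketch (shapes chosen to match this thread; random tensors stand in for real data). Note how, once the input is 3D, `NLLLoss` treats dim 1 as the class dimension, so a `(60, 2, 10)` input means "2 classes" and demands a `(60, 10)` target of class indices:

```python
import torch
import torch.nn as nn

loss_fn = nn.NLLLoss()

# 2D case: input (N, C) log-probabilities, target (N,) class indices in [0, C)
inp = torch.log_softmax(torch.randn(60, 10), dim=1)
tgt = torch.randint(0, 10, (60,))
loss = loss_fn(inp, tgt)  # scalar

# 3D case: input (N, C, d1) = (60, 2, 10) -> NLLLoss sees C=2 classes
# and expects a target of shape (N, d1) = (60, 10), hence the error message.
inp3 = torch.log_softmax(torch.randn(60, 2, 10), dim=1)
tgt3 = torch.randint(0, 2, (60, 10))
loss3 = loss_fn(inp3, tgt3)  # scalar
```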

Can you post your full model? And why are you reshaping the final output?


I have printed the labels’ size. Wouldn’t you expect my model output and the labels to have the same dimensions? Only then can they be compared.

I have posted my whole model in the code above.
Every image belongs to one of ten classes in each of two categories (i.e., the label size is [2, 10]: one one-hot vector of ten classes per category).
My model gives an output of (batch_size, 20), so I am reshaping it to (batch_size, 2, 10) to match the dimensions of the labels.
(The batch size is 60.)

In the forward function, change the return statement:

 op1 = self.branch1(X)
 op2 = self.branch2(X)
 return op1, op2

and while training:

opt.zero_grad()
outputs = model(inputs)
# NLLLoss expects class indices; if the labels are one-hot (batch, 2, 10),
# convert each (batch, 10) slice to indices with argmax:
loss1 = loss_fn(outputs[0], labels[:, 0, :].argmax(dim=1))
loss2 = loss_fn(outputs[1], labels[:, 1, :].argmax(dim=1))
loss = loss1 + loss2
loss.backward()
opt.step()

Hope this helps!
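Putting the suggestion together, here is a minimal end-to-end sketch of the two-headed setup. Random tensors stand in for the real images and labels, the heads are shortened for brevity, and a `LogSoftmax` layer is added at the end of each head since `NLLLoss` expects log-probabilities (an assumption; if your `loss_fn` is `CrossEntropyLoss`, the heads should emit raw logits instead):

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shortened common trunk; the full model above has more layers.
        self.common = nn.Sequential(nn.Linear(64 * 64 * 3, 320), nn.ReLU())
        # One head per category, each predicting 10 classes.
        self.branch1 = nn.Sequential(nn.Linear(320, 10), nn.LogSoftmax(dim=1))
        self.branch2 = nn.Sequential(nn.Linear(320, 10), nn.LogSoftmax(dim=1))

    def forward(self, x):
        x = self.common(x.view(x.size(0), -1))
        return self.branch1(x), self.branch2(x)

model = TwoHeadNet()
loss_fn = nn.NLLLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Fake batch: 60 RGB 64x64 images, one-hot labels of shape (60, 2, 10).
inputs = torch.randn(60, 3, 64, 64)
labels = torch.zeros(60, 2, 10)
labels[torch.arange(60), 0, torch.randint(0, 10, (60,))] = 1
labels[torch.arange(60), 1, torch.randint(0, 10, (60,))] = 1

opt.zero_grad()
op1, op2 = model(inputs)
loss = (loss_fn(op1, labels[:, 0, :].argmax(dim=1))
        + loss_fn(op2, labels[:, 1, :].argmax(dim=1)))
loss.backward()
opt.step()
```

Each head is scored against its own category, so neither reshaping the output nor concatenating the two heads is needed.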