Accuracy low after adding FC layers in parallel

I’m using ResNet50 and wanted to add FC layers in parallel.
I’m constructing my architecture like this:

import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ResNet50(nn.Module):
    def __init__(self, num_classes, num_fcs=2, **kwargs):
        super(ResNet50, self).__init__()
        resnet50 = torchvision.models.resnet50(pretrained=True)
        # keep everything up to (but excluding) the original avgpool and fc
        self.base = nn.Sequential(*list(resnet50.children())[:-2])
        # parallel FC classifiers on top of the shared 2048-d feature
        self.num_fcs = num_fcs
        for i in range(num_fcs):
            setattr(self, "fc%d" % i, nn.Linear(2048, num_classes))

    def forward(self, x):
        x = self.base(x)
        x = F.avg_pool2d(x, x.size()[2:])
        f = x.view(x.size(0), -1)
        # each parallel FC head gets the same pooled feature
        clf_outputs = {}
        for i in range(self.num_fcs):
            clf_outputs["fc%d" % i] = getattr(self, "fc%d" % i)(f)
        return clf_outputs, f
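
For reference, this is how I sanity-check the output shapes (just a quick check on random input; the class count, batch size, and resolution below are placeholders):

import torch

model = ResNet50(num_classes=751)   # placeholder class count
x = torch.randn(4, 3, 224, 224)     # dummy batch of 4 RGB images
clf_outputs, f = model(x)
print(f.shape)                      # torch.Size([4, 2048])
for name, out in clf_outputs.items():
    print(name, out.shape)          # fc0 / fc1: torch.Size([4, 751])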
I'm getting this:

mAP: 20.7%

What could be the possible reason for this?

Maybe you have to scale down your learning rate, since you are now summing both losses.
Could you try that and see if the learning improves?
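
Something like this is what I mean (a rough sketch; the criterion, optimizer, and loss weighting are assumptions on my side):

import torch.nn as nn
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
# If the total loss is the sum of both heads, its gradients are roughly
# twice as large as with a single FC, so the effective step size doubles.
optimizer = optim.Adam(model.parameters(), lr=1e-4 / 2)  # scaled-down lr

def train_step(images, labels):
    optimizer.zero_grad()
    clf_outputs, _ = model(images)
    # Summing the per-head losses; averaging them instead would keep
    # the gradient magnitude comparable to the single-FC setup.
    loss = sum(criterion(out, labels) for out in clf_outputs.values())
    loss.backward()
    optimizer.step()
    return loss.item()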


My learning rate is already 1e-4, and it was the same in both conditions. Does scaling it down further help? And just to confirm, this is how we add FC layers in parallel, right?

The linear layers should be alright. You can check the gradients if you have doubts, but the code looks fine.
I think that since your loss is now bigger, the learning rate might be too high.
It’s speculative, but might be worth a try.
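
To check the gradients, you could print the per-parameter gradient norms right after backward(), e.g.:

# after loss.backward(), inspect the gradient norms of the parallel heads
for name, param in model.named_parameters():
    if param.grad is not None and name.startswith("fc"):
        print(name, param.grad.norm().item())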

I tried with a lower learning rate as well: I went from lr = 0.0003 down to 0.0000005, but the loss still stops going down after 50-60 epochs.
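
Would it make sense to decay the learning rate during training instead of fixing it? Something like this (just a sketch; the factor and patience values are guesses, and train_one_epoch is a placeholder for my training loop):

import torch.optim as optim

optimizer = optim.Adam(model.parameters(), lr=3e-4)
# halve the lr whenever the loss plateaus for 10 epochs
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10)

for epoch in range(num_epochs):
    epoch_loss = train_one_epoch()   # placeholder training loop
    scheduler.step(epoch_loss)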