Adding a fully connected layer in ResNet

If I want to add a fully connected layer after the pooling in the ResNet, how can I use setattr and getattr instead of this:

self.layer1 = nn.Linear(512, 512)
self.layer2 = nn.Linear(512, num_classes)

self.layer3 = nn.Linear(512, 512)
self.layer4 = nn.Linear(512, num_classes)


out = out.view(out.size(0), -1)
out1 = self.layer1(out)
out1 = F.relu(out1)
out1 = self.layer2(out1)

out2 = self.layer3(out)
out2 = F.relu(out2)
out2 = self.layer4(out2)

This one shows an error:

self.num_ftrs = num_filters
for i in range(self.num_ftrs):
    setattr(self, "fc%d" % i, nn.Linear(512, 512))
    setattr(self, "fc%d" % i, nn.Linear(512, num_classes))


out = out.view(out.size(0), -1)
outputs = []
for i in range(self.num_ftrs):
    out = getattr(self, "fc%d" % i)(out)
    out = F.relu(out)
    outputs.append(getattr(self, "fc%d" % i)(out))

What kind of error are you seeing?
Based on the provided code, I would guess you might be running into a shape mismatch, as you are currently redefining fc%d in the first loop, which would create a stack of layers like:

self.fc1 = nn.Linear(512, num_classes)
self.fc2 = nn.Linear(512, num_classes)
self.fc3 = nn.Linear(512, num_classes)
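
For illustration, here is a minimal sketch of what the forward loop would then effectively run (assuming num_classes differs from 512, with the layer pulled out of the loop for clarity):

import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10                  # illustrative value
fc = nn.Linear(512, num_classes)  # the second setattr overwrote the Linear(512, 512)

out = torch.randn(1, 512)
out = fc(out)                     # works: [1, 512] -> [1, num_classes]
out = F.relu(out)
out = fc(out)                     # shape-mismatch RuntimeError: fc expects 512 input features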

Thank you for your reply, @ptrblck.
Yes, the error is a shape mismatch. How can I solve it? I want to have a fully connected layer with a ReLU before each fully connected layer.

You could iterate over all linear layers you wish to manipulate and replace each of them with your additional linear + ReLU followed by the original linear layer, wrapped in an nn.Sequential container.
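
For example, a minimal sketch of that replacement on a torchvision ResNet18 (the backbone choice and num_classes are just illustrative assumptions) could look like this:

import torch.nn as nn
from torchvision import models

num_classes = 10
model = models.resnet18(num_classes=num_classes)

original_fc = model.fc                        # nn.Linear(512, num_classes)
model.fc = nn.Sequential(
    nn.Linear(original_fc.in_features, 512),  # additional hidden layer
    nn.ReLU(),
    original_fc,                              # original linear layer kept at the end
)

Wrapping the original linear layer this way keeps its weights and only inserts the new linear + ReLU in front of it.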

Thank you for your reply, @ptrblck.

I tried

self.layer5 = nn.Sequential(nn.Linear(512, 512),
                            nn.ReLU(),
                            nn.Linear(512, num_classes))

self.num_ftrs = 2
for i in range(self.num_ftrs):
    setattr(self, "fc%d" % i, self.layer5)

and in the forward

clf_outputs = []
for i in range(self.num_ftrs):
    clf_outputs.append(getattr(self, "fc%d" % i)(out))

but the outputs clf_outputs[0] and clf_outputs[1] were the same. The outputs are different if I change it to

self.layer5 = nn.Sequential(nn.Linear(512, 512),
                    nn.ReLU(),
                    nn.Linear(512, num_classes))

self.layer6 = nn.Sequential(nn.Linear(512, 512),
                    nn.ReLU(),
                    nn.Linear(512, num_classes))

and in the forward

clf_outputs1 = self.layer5(out)
clf_outputs2 = self.layer6(out)
clf_outputs = [clf_outputs1, clf_outputs2]

But I want the first approach to work. Is there any way I can keep using setattr and getattr while getting different outputs?
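
The reason the first approach produced identical outputs is that setattr registered the very same self.layer5 object under both names, so fc0 and fc1 share one set of parameters. A minimal sketch that keeps setattr/getattr but builds a fresh nn.Sequential inside the loop (the sizes and the class name are illustrative) could look like this:

import torch
import torch.nn as nn

class Heads(nn.Module):
    def __init__(self, num_ftrs=2, num_classes=10):
        super().__init__()
        self.num_ftrs = num_ftrs
        for i in range(self.num_ftrs):
            # a new nn.Sequential per iteration, so every fc%d owns its own parameters
            setattr(self, "fc%d" % i, nn.Sequential(nn.Linear(512, 512),
                                                    nn.ReLU(),
                                                    nn.Linear(512, num_classes)))

    def forward(self, out):
        return [getattr(self, "fc%d" % i)(out) for i in range(self.num_ftrs)]

heads = Heads()
clf_outputs = heads(torch.randn(1, 512))
print(torch.allclose(clf_outputs[0], clf_outputs[1]))  # False: the heads no longer share weights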