How to freeze the input layer and output FC layer during training


I am pretty new to this forum and to PyTorch. I am trying to use EfficientNet as a classifier to detect emotion in the FER2013 dataset. I need to modify the input (it has to accept 1-channel, 48x48 images) and the output (7 classes). Here is my modification:

    class Classifier(nn.Module):
        def __init__(self, n_classes):
            super(Classifier, self).__init__()
            self.effnet = EfficientNet.from_pretrained('efficientnet-b0')
            # Replace the classifier head with a fresh Linear layer
            # (just setting out_features would not resize the weights)
            self.effnet._fc = nn.Linear(self.effnet._fc.in_features, n_classes)
            # Replace the stem so it accepts 1-channel input
            self.effnet._conv_stem = Conv2dStaticSamePadding(
                1, 32, kernel_size=(3, 3), stride=(2, 2),
                image_size=48, bias=False)  # trailing args assumed; original post was cut off here

        def forward(self, input):
            x = self.effnet(input)
            return x

    model = Classifier(n_classes=7)
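To sanity-check the intended shapes, here is a minimal sketch using a stand-in stem and head (the layers below are illustrative placeholders, not the real EfficientNet modules):

```python
import torch
import torch.nn as nn

# Stand-in for the modified stem and head above:
# a 1-channel conv stem followed by a 7-way linear classifier.
stem = nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1)
head = nn.Linear(32, 7)

x = torch.randn(4, 1, 48, 48)      # batch of 4 grayscale 48x48 images
feats = stem(x)                    # -> (4, 32, 24, 24)
pooled = feats.mean(dim=(2, 3))    # global average pool -> (4, 32)
logits = head(pooled)              # -> (4, 7), one logit per emotion class
print(logits.shape)
```

If the shapes line up here, the same 1-channel, 48x48 input should pass through the modified stem as well.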

I want to fine-tune the pretrained (ImageNet) network, which means I only need to train the input and last FC layers, since I replaced them.

Having read about how to freeze layers, I am thinking of something along these lines:

    ct = 0
    for child in model.children():
        ct += 1
        if 1 < ct < x:  # x = index of the last child I want to keep trainable
            for param in child.parameters():
                param.requires_grad = False

However, is this the right method? I also don't see the requires_grad parameter documented anywhere.


Yes, you can freeze parameters by setting their requires_grad attribute to False, as described in the Fine Tuning Tutorial (which includes more useful details about fine-tuning a model :wink: ).
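A minimal sketch of that pattern, using a stand-in model (layer sizes and positions are illustrative, not EfficientNet's real ones): freeze everything first, then unfreeze the layers you replaced, and hand only the trainable parameters to the optimizer.

```python
import torch
import torch.nn as nn

# Stand-in model: first layer and head are the "replaced" layers to train,
# the middle layer plays the role of the pretrained backbone to freeze.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3),   # index 0: new input stem - keep trainable
    nn.Conv2d(32, 64, 3),  # index 1: pretrained middle - freeze
    nn.Linear(64, 7),      # index 2: new classifier head - keep trainable
)

# Freeze everything, then unfreeze the replaced layers.
for param in model.parameters():
    param.requires_grad = False
for param in model[0].parameters():
    param.requires_grad = True
for param in model[2].parameters():
    param.requires_grad = True

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
print(trainable, frozen)
```

Filtering the parameters passed to the optimizer is optional (frozen parameters receive no gradients either way), but it keeps the optimizer from tracking state for tensors that never update.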
