Freeze first basic block of sequential layers (layer 1) in a ResNet model

Is it possible to freeze only the first basic block of sequential layers (layer 1) in a ResNet model?

num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
child_counter = 0
for child in model.children():
    if child_counter < 4:
        print("child ", child_counter, " was frozen")
        for param in child.parameters():
            param.requires_grad = False
    elif child_counter == 4:
        children_of_child_counter = 0
        for children_of_child in child.children():
            if children_of_child_counter < 1:
                for param in child.parameters():
                    param.requires_grad = False
                print('child ', children_of_child_counter, ' of child ', child_counter, ' was frozen')
            else:
                print('child ', children_of_child_counter, ' of child ', child_counter, ' was not frozen')
            children_of_child_counter += 1
    else:
        print("child ", child_counter, " was not frozen")
    child_counter += 1
summary(model, (3, 224, 224))

The print statements look correct. However, the summary output shows that the whole of layer1 is frozen.
Please help.
Thanks in advance.

What is the code for your summary function?

Trainable params: 11,532,008
Non-trainable params: 157,504
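
A quick arithmetic check (assuming a torchvision resnet18-style model, whose layer1 contains two BasicBlocks and no downsample convolution): the non-trainable count corresponds to conv1 + bn1 + both basic blocks of layer1, which would mean the whole of layer1 ended up frozen, not just its first block:

```python
# Parameter counts for a resnet18-style stem and layer1
# (assumption: 64-channel stem, two BasicBlocks in layer1,
# no downsample branch in layer1).
conv1 = 64 * 3 * 7 * 7                   # stem conv: 9,408 weights, no bias
bn1 = 64 + 64                            # stem batchnorm weight + bias: 128
block = 2 * (64 * 64 * 3 * 3 + 64 + 64)  # one BasicBlock: two conv+bn pairs
layer1 = 2 * block                       # two BasicBlocks
print(conv1 + bn1 + layer1)              # prints 157504
```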

This doesn’t answer my question.

Is summary a function that you use from some library? If so, could you point me to the documentation for this function?

If summary is defined in your own code, could you share the code for this function?

Without knowing what summary is supposed to print, I cannot make sense of its output.

from torchsummary import summary

Summarize the given PyTorch model. Summarized information includes:
    1) Layer names,
    2) input/output shapes,
    3) kernel shape,
    4) # of parameters,
    5) # of operations (Mult-Adds)

Args:
    model (nn.Module):
            PyTorch model to summarize. The model should be fully in either train()
            or eval() mode. If layers are not all in the same mode, running summary
            may have side effects on batchnorm or dropout statistics. If you
            encounter an issue with this, please open a GitHub issue.

    input_data (Sequence of Sizes or Tensors):
            Example input tensor of the model (dtypes inferred from model input).
            - OR -
            Shape of input data as a List/Tuple/torch.Size
            (dtypes must match model input, default is FloatTensors).
            You should NOT include batch size in the tuple.
            - OR -
            If input_data is not provided, no forward pass through the network is
            performed, and the provided model information is limited to layer names.
            Default: None

    batch_dim (int):
            Batch_dimension of input data. If batch_dim is None, the input data
            is assumed to contain the batch dimension.
            WARNING: in a future version, the default will change to None.
            Default: 0

    branching (bool):
            Whether to use the branching layout for the printed output.
            Default: True

    col_names (Iterable[str]):
            Specify which columns to show in the output. Currently supported:
            ("input_size", "output_size", "num_params", "kernel_size", "mult_adds")
            If input_data is not provided, only "num_params" is used.
            Default: ("output_size", "num_params")

    col_width (int):
            Width of each column.
            Default: 25

    depth (int):
            Number of nested layers to traverse (e.g. Sequentials).
            Default: 3

    device (torch.Device):
            Uses this torch device for model and input_data.
            If not specified, uses result of torch.cuda.is_available().
            Default: None

    dtypes (List[torch.dtype]):
            For multiple inputs, specify the size of both inputs, and
            also specify the types of each parameter here.
            Default: None

    verbose (int):
            0 (quiet): No output
            1 (default): Print model summary
            2 (verbose): Show weight and bias layers in full detail
            Default: 1

    *args, **kwargs:
            Other arguments used in `model.forward` function.

Returns:
    ModelStatistics object
            See torchsummary/ for more information.

Thank you for the reference.

I tried running the code from your original post, and got the following error:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got 3-dimensional input of size [3, 224, 224] instead

Since I can’t run the code, I can’t debug it either.