Freezing layers for transfer learning

Hi, guys,
My model is ShuffleNet_V2_X0_5_Weights.IMAGENET1K_V1
I'm trying to freeze layers using this code:

for param in model.parameters():
    param.requires_grad = False

But it doesn’t work. I receive:

‘RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn’

So I found this code:

for params in list(model.parameters())[0:-5]:
    params.requires_grad = False

It works but I don’t understand what [0:-5] means in this case.

And I tried:

for param in model.features.parameters():
    param.requires_grad = False

My model doesn’t have an attribute ‘features’, so what is the right way to freeze layers?

You cannot freeze all parameters and expect backward() to work and calculate gradients.
The indexing, e.g. via [0:-5], is used to freeze all parameters except the last 5 (what these correspond to depends on your model); you can check them as shown below.
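For example, you can print the last 5 parameter tensors to see which layers the slice leaves trainable (a quick check, assuming model is already instantiated):

for name, param in list(model.named_parameters())[-5:]:
    print(name, param.shape)
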
The last code snippet expects your model to contain a .features attribute, which is again model-dependent.
Check which layers are defined in your model, decide which of them should be frozen, and access them directly via their attributes.
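
Here is a minimal sketch for your model, assuming torchvision's ShuffleNetV2 implementation, which exposes conv1, maxpool, stage2, stage3, stage4, conv5, and fc as attributes (the 10 output classes are just a placeholder for your task):

import torch.nn as nn
from torchvision import models

# load the pretrained model with the weights enum from the question
weights = models.ShuffleNet_V2_X0_5_Weights.IMAGENET1K_V1
model = models.shufflenet_v2_x0_5(weights=weights)

# freeze the feature extractor blocks and keep the classifier trainable
for module in [model.conv1, model.stage2, model.stage3, model.stage4, model.conv5]:
    for param in module.parameters():
        param.requires_grad = False

# replace the classifier for the new task (10 classes is a placeholder);
# newly created parameters require gradients by default
model.fc = nn.Linear(model.fc.in_features, 10)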