Freezing the last few layers causes an error in backpropagation

I have a neural network with two parts: a backbone model and a branch called FiLM_gen.
The architecture looks like:

Since I want to freeze all the parameters in the backbone model and only update the ones in the branch, I wrote:

    for param in model.module.parameters():
        param.requires_grad = False
    for param in model.module.FiLM_gen.parameters():
        param.requires_grad = True
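
After freezing, it can also help to hand only the still-trainable parameters to the optimizer. A minimal sketch, assuming a standard optimizer setup that is not shown in the original post:

    # Sketch: collect only the parameters that still require gradients
    # (i.e. the FiLM_gen branch) and build the optimizer from them.
    trainable_params = [p for p in model.module.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable_params, lr=1e-4)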

But after loss.backward(), I got an error like:

    Traceback (most recent call last):
      File "train.py", line 216, in <module>
        loss.backward()
      File "/users/../anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/users/../anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I have trained the backbone model before without any back-propagation problems, so I assume the functions are all differentiable. The error must be caused by freezing the layers.

So I tried setting requires_grad of the last linear layer to True, and the loss was back-propagated successfully. Could anyone tell me if it is possible to back-propagate with the last layer frozen, and if so, how could I do it?
Thanks
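
For what it's worth, freezing the last layer by itself should not block back-propagation as long as some earlier parameters still require gradients. A minimal, self-contained sketch (toy modules, not the actual model) that illustrates this:

    import torch
    import torch.nn as nn

    # Toy network standing in for the real model: two linear layers.
    net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

    # Freeze only the last linear layer.
    for p in net[2].parameters():
        p.requires_grad = False

    loss = net(torch.randn(2, 4)).sum()
    loss.backward()  # works: gradients still flow through the frozen layer

    print(net[0].weight.grad is not None)  # True, earlier layer got a gradient
    print(net[2].weight.grad)              # None, the frozen layer does not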

Then I found the reason.
If your final model output is computed only from tensors that all have requires_grad = False, then back-propagation is impossible.
I had this problem because I had accidentally removed the connection between my branch and the model, so the output no longer depended on any trainable parameters.
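
A minimal reproduction of both cases (toy layers, not the original FiLM setup): if the output is built only from frozen parameters, backward() would raise exactly this RuntimeError; once a trainable path is reconnected, it succeeds.

    import torch
    import torch.nn as nn

    frozen = nn.Linear(4, 1)
    for p in frozen.parameters():
        p.requires_grad = False

    out = frozen(torch.randn(2, 4)).sum()
    # out.grad_fn is None here, so out.backward() would raise:
    # "element 0 of tensors does not require grad and does not have a grad_fn"

    # Reconnecting a trainable branch (stand-in for FiLM_gen) fixes it.
    branch = nn.Linear(4, 1)
    out = (frozen(torch.randn(2, 4)) + branch(torch.randn(2, 4))).sum()
    out.backward()  # succeeds: the graph now reaches trainable parameters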
