RuntimeError when running transfer learning

I am running the notebook finetune.ipynb from the timesler/facenet-pytorch repository on GitHub.

I want to train only the final layer of the model:

resnet = InceptionResnetV1(

I froze all layers and then unfroze the last layer:

# Freeze all layers
for param in resnet.parameters():
    param.requires_grad = False

# Unfreeze the last layer
for param in resnet.logits.parameters():
    param.requires_grad = True
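As a sanity check on this freeze/unfreeze pattern, the snippet below applies it to a hypothetical stand-in model (a plain nn.Sequential, not InceptionResnetV1) and lists which parameters remain trainable:

```python
import torch.nn as nn

# Hypothetical stand-in: any model whose last module is the classification layer.
model = nn.Sequential(
    nn.Linear(8, 8),
    nn.ReLU(),
    nn.Linear(8, 3),  # treat this as the "last layer"
)

# Freeze everything, then unfreeze only the last layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

# Only the final Linear's weight and bias should remain trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # → ['2.weight', '2.bias']
```

If more than the last layer shows up here, the optimizer will still update frozen-by-intent parameters.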

When I run the training code, I get this error:

Valid |     2/2    | loss:    6.2345 | fps:  182.8512 | acc:    0.0000   

Epoch 1/100
RuntimeError                              Traceback (most recent call last)
<ipython-input-45-49d240c83dad> in <module>()
     19         resnet, loss_fn, train_loader, optimizer, scheduler,
     20         batch_metrics=metrics, show_running=True, device=device,
---> 21         writer=writer
     22     )

2 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    145     Variable._execution_engine.run_backward(
    146         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 147         allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

The logits layer is only used if self.classify is set to True, as seen here, so I guess that might not be the case in your run.
If so, the logits layer is never part of the forward pass while all other parameters are frozen, so the output does not require gradients, which yields exactly this error.
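A minimal sketch in plain PyTorch (using a hypothetical stand-in model instead of InceptionResnetV1) that reproduces the failure mode and the fix: when classify is False, the output is computed entirely from frozen parameters and backward() raises this exact RuntimeError; with classify=True the unfrozen logits layer enters the graph and training works:

```python
import torch
import torch.nn as nn

class ToyResnet(nn.Module):
    """Stand-in with the same structure as the issue: a frozen body
    plus a logits layer that is skipped unless self.classify is True."""
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(4, 4)
        self.logits = nn.Linear(4, 2)
        self.classify = False

    def forward(self, x):
        x = self.body(x)
        if self.classify:
            x = self.logits(x)
        return x

net = ToyResnet()
for p in net.parameters():
    p.requires_grad = False
for p in net.logits.parameters():
    p.requires_grad = True

# classify=False: logits never runs, output comes only from frozen params.
out = net(torch.randn(3, 4))
try:
    out.sum().backward()
    err = None
except RuntimeError as e:
    err = str(e)
print(err)  # "element 0 of tensors does not require grad and does not have a grad_fn"

# classify=True: logits is in the graph, so backward succeeds and only
# the logits parameters receive gradients.
net.classify = True
net(torch.randn(3, 4)).sum().backward()
print(net.logits.weight.grad is not None, net.body.weight.grad is None)
```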