Transfer learning

During training, how do I freeze intermediate layers of the network architecture in transfer learning?

You just need to set `requires_grad` of the parameters of those intermediate layers to `False`. There’s an example here (in the context of ResNet-18):

http://pytorch.org/docs/notes/autograd.html#excluding-subgraphs
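A minimal sketch of that pattern, along the lines of the linked docs: load a pretrained ResNet-18, freeze everything, then replace the final layer so only it is trained. The number of target classes here is assumed for illustration.

```python
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

# Load a pretrained ResNet-18 (newer torchvision versions use the
# `weights=` argument instead of `pretrained=True`).
model = models.resnet18(pretrained=True)

# Freeze all parameters: autograd will not compute gradients for them.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier head. Parameters of newly constructed modules
# have requires_grad=True by default, so only this layer will train.
num_classes = 10  # hypothetical target task
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Pass only the trainable parameters to the optimizer.
optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```

Passing only `model.fc.parameters()` to the optimizer avoids updating (and tracking momentum buffers for) the frozen layers.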

Thanks for the reply!