I’ve got a small question regarding fine-tuning a model: how can I download a pre-trained model like VGG and then use it as the base for new layers built on top of it? In Caffe there was a ‘model zoo’; does such a thing exist in PyTorch?
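For concreteness, this is the kind of workflow I’m hoping for (assuming a torchvision-style API exists — treat this as a sketch, not something I know works):

```python
import torchvision.models as models

# Download VGG-16 with ImageNet pre-trained weights
# (ideally cached locally after the first call).
vgg = models.vgg16(pretrained=True)
print(vgg)  # inspect the architecture before building on top of it
```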
I went through that thread and got a pretty good idea of how to fine-tune things, but is there a way to manually remove a layer from the pre-trained network? From what I’ve learnt, even if I set `requires_grad = False` on the layers I don’t want in my graph, they will still carry the pre-trained weights. So essentially I want to do something like this (in pseudo-code):
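(A rough sketch of what I mean; `num_my_classes` is just a placeholder, and the layer indices assume VGG-16’s classifier layout.)

```python
import torch.nn as nn
import torchvision.models as models

num_my_classes = 10  # hypothetical number of target classes

vgg = models.vgg16(pretrained=True)

# Drop the final Linear(4096, 1000) layer from the classifier entirely,
# then bolt my own output layer on in its place.
modules = list(vgg.classifier.children())[:-1]
modules.append(nn.Linear(4096, num_my_classes))
vgg.classifier = nn.Sequential(*modules)
```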
That’s a nice tutorial. However, fine-tuning in PyTorch comes down to understanding what the computational graph does. After reading the docs, it’s clear that to fine-tune, gradients must not be backpropagated into the pre-trained weights. So if you’re fine-tuning off, say, VGG, you can do something like this:
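(A minimal sketch along the lines of the docs; the class count and optimizer settings here are arbitrary.)

```python
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

vgg = models.vgg16(pretrained=True)

# Freeze every pre-trained parameter so no gradients flow back into them.
for param in vgg.parameters():
    param.requires_grad = False

# Replace the final classifier layer. Parameters of newly constructed
# modules have requires_grad=True by default, so only this layer trains.
vgg.classifier[6] = nn.Linear(4096, 10)  # 10 = example number of classes

# Pass only the trainable parameters to the optimizer.
optimizer = optim.SGD(
    filter(lambda p: p.requires_grad, vgg.parameters()),
    lr=1e-3, momentum=0.9,
)
```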
Usually, it’s a matter of choice in fine-tuning how many layers to freeze. Most people tend to freeze most layers, because backpropagating through everything slows the system down. Ideally, though, no layers would be frozen, which is why I left it like that on purpose! If you want a middle ground, a common pattern is sketched below.
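(A sketch of one common compromise, not a rule: freeze only the convolutional feature extractor and leave the fully connected head trainable.)

```python
import torchvision.models as models

vgg = models.vgg16(pretrained=True)

# Freeze only the convolutional feature extractor; the fully connected
# classifier keeps requires_grad=True and so continues to train.
for param in vgg.features.parameters():
    param.requires_grad = False
```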