Questions about fine-tuning in Faster R-CNN

Hi everyone.
I am following the tutorial from the official PyTorch website, which teaches how to fine-tune Mask R-CNN:
https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

I have a question about the main function (a little further down in the tutorial). You will see this code:

params = [p for p in model.parameters() if p.requires_grad]

As far as I know, when we fine-tune classification models (like VGG16), we freeze every layer except the last one. Why is this code used here?
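For reference, this is the freezing pattern I mean; a minimal sketch assuming torchvision's VGG16 (the head index classifier[6] follows torchvision's layout, and num_classes is just illustrative):

import torch.nn as nn
import torchvision

# Load a pretrained VGG16 (sketch; the weights argument follows the current torchvision API).
model = torchvision.models.vgg16(weights="IMAGENET1K_V1")

# Freeze every parameter in the network.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier layer; a freshly created module has
# requires_grad=True by default, so only this layer will be trained.
num_classes = 10  # hypothetical number of target classes
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)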

I have already read the official Faster R-CNN paper, which describes up to 3 different ways to train this architecture.

I will appreciate any explanation :smiley:

PS: this is my first project with this kind of architecture.

This line of code stores all parameters that require gradients, and will thus be trained, in a list.
This list is then passed to the optimizer. You could also pass all parameters to the optimizer and let it check internally whether valid gradients were calculated for each parameter or whether the parameter should be skipped.

The posted code snippet is a good way to write explicit code in my opinion.
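To make that concrete, here is a sketch of both options (the model constructor and hyperparameters follow the linked tutorial, but any nn.Module behaves the same way):

import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Explicit: collect only the parameters that require gradients.
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)

# Implicit alternative: pass everything. Frozen parameters never receive
# gradients (their .grad stays None), so the optimizer simply skips them.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=0.0005)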


Thanks for your answer, I appreciate it very much, but my doubt hasn't been resolved yet :slight_smile:

After reading the official Faster R-CNN paper:
the paper describes 3 different training methods (alternating training, approximate joint training, and non-approximate joint training).

Is the method you explained one of the training methods described in the paper?
If not, why does PyTorch use this training method?

Sorry if I am asking something trivial, but I am now at a point where I am beginning to understand a lot of things (and new questions keep coming up).