Related to optimizer and loss function in transfer learning

If I change requires_grad of some parameters to True or False midway through training, i.e. after a few epochs, do I need to reinitialize the optimizer or the loss function?
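
For concreteness, roughly the kind of loop I mean (the model, data, and freezing point are just placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    if epoch == 5:
        # freeze the first layer midway through training
        for param in model[0].parameters():
            param.requires_grad = False

    x = torch.randn(8, 10)
    target = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(x), target)
    loss.backward()
    optimizer.step()
```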

Setting the .requires_grad attribute of (some) parameters to False will skip the gradient calculation for these parameters.
Since the optimizer still holds references to these parameters, it would perform the step() operation on zero gradients and, in the simplest case (e.g. plain SGD without momentum or weight decay), would basically “skip” these parameters.
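
A minimal sketch of this simplest case, assuming plain SGD and that the stale gradients are zeroed via zero_grad(set_to_none=False) rather than removed:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# one normal step so every parameter gets a populated .grad buffer
model(torch.randn(2, 4)).sum().backward()
optimizer.step()

# freeze the first layer; the optimizer still holds references to its parameters
for param in model[0].parameters():
    param.requires_grad = False

frozen_before = model[0].weight.clone()
optimizer.zero_grad(set_to_none=False)     # stale grads are zeroed, not set to None
model(torch.randn(2, 4)).sum().backward()  # frozen parameters get no new gradient
optimizer.step()                           # zero grad + plain SGD -> no update
print(torch.equal(frozen_before, model[0].weight))  # True
```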

However, if the optimizer keeps internal running estimates for each parameter (e.g. momentum buffers in SGD or the running averages in Adam) or you have been using weight decay, these parameters might still be updated even though their gradient is zero.
If that’s not desired, you would have to reinitialize the optimizer with only the trainable parameters (and try to restore the running estimates of the remaining parameters).
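
A sketch of this case, using Adam as an example of an optimizer with running estimates, followed by rebuilding the optimizer with only the still-trainable parameters as one common workaround (restoring the running estimates of the remaining parameters from the old optimizer.state_dict() is not shown here):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

# a few normal steps so Adam builds up running estimates for every parameter
for _ in range(3):
    optimizer.zero_grad(set_to_none=False)
    model(torch.randn(2, 4)).sum().backward()
    optimizer.step()

# freeze the first layer
for param in model[0].parameters():
    param.requires_grad = False

frozen_before = model[0].weight.clone()
optimizer.zero_grad(set_to_none=False)
model(torch.randn(2, 4)).sum().backward()
optimizer.step()  # Adam's running estimates still move the frozen weights
print(torch.equal(frozen_before, model[0].weight))  # False

# one workaround: rebuild the optimizer with only the trainable parameters
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)
```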