How do you freeze some layers of a network in PyTorch and train only the rest?

Yes, it does work: you can add the parameters to the optimizer while they still have requires_grad=True and set the flag to False afterwards. You can also verify this yourself by commenting out

optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)

In the snippet above, commenting out that line leaves the previous optimizer in place. That optimizer still contains all parameters, including fc2, but since fc2's requires_grad flag has been changed to False, no gradients are computed for it anymore, and the optimizer leaves it unchanged.
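
For context, here is a minimal sketch of the kind of setup the snippet refers to. The Net class, the fc1/fc2 layer sizes, and the forward pass are assumptions reconstructed from the fc2 mentioned above, not code from the original thread:

import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical two-layer net; only the fc2 name comes from the text above.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.fc2 = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = Net()

# Freeze fc2: autograd stops computing gradients for these parameters.
for p in net.fc2.parameters():
    p.requires_grad = False

# Pass only the still-trainable parameters to the new optimizer.
optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)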

Note that the snippet above assumes a common “train => save => load => freeze parts” scenario.
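
As a hedged end-to-end sketch of that scenario (the file name "net.pt", the dummy data shapes, and the MSE loss are placeholders, and Net is the assumed class from the sketch above):

import torch
import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()
x, y = torch.randn(8, 10), torch.randn(8, 2)   # dummy data (assumed shapes)

# train => save
net = Net()                                     # Net as sketched above (assumed)
optimizer = optim.SGD(net.parameters(), lr=0.1)
for _ in range(5):
    optimizer.zero_grad()
    criterion(net(x), y).backward()
    optimizer.step()
torch.save(net.state_dict(), "net.pt")          # placeholder path

# load => freeze parts
net = Net()
net.load_state_dict(torch.load("net.pt"))
for p in net.fc2.parameters():                  # freeze fc2
    p.requires_grad = False

# Only parameters that still require gradients go into the new optimizer.
optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)

before = net.fc2.weight.detach().clone()
optimizer.zero_grad()
criterion(net(x), y).backward()
optimizer.step()
assert torch.equal(before, net.fc2.weight)      # fc2 was not updated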