When training 2 models simultaneously, how to stop training on one and continue on the other?

I am training two models, m1 and m2, by passing their parameters together to the optimizer:

params = list(m1.parameters()) + list(m2.parameters())
optimizer = torch.optim.SGD(params, lr=0.01)

At one point I want to continue training only m1.
How do I remove m2 from the training?
Should I set requires_grad to False, modify the optimizer's param_groups directly, or recreate the optimizer?

Thanks.

Freezing the parameters should work as long as the optimizer does not use any running stats (your plain SGD does not, unlike e.g. Adam) and as long as the gradients are set to None, which is the default behavior in newer PyTorch releases. Setting the gradients to None would also let you use optimizers with running stats, since the update is skipped for parameters whose gradient is None.
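
A minimal sketch of the freezing approach, assuming m1, m2, and the optimizer from your snippet (set_to_none=True is the default in newer PyTorch releases):

# Freeze m2 so no gradients are computed for its parameters.
for p in m2.parameters():
    p.requires_grad_(False)

# Drop any stale gradients so the next step() cannot apply a
# leftover update to m2.
optimizer.zero_grad(set_to_none=True)

# Continue the training loop as before; only m1 is updated from here on.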
Recreating the stateless optimizer should also work, but as you can see, the best approach depends on your setup and on whether the optimizer uses running stats.
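
And a sketch of the second approach, simply rebuilding the optimizer around m1 only (for a stateful optimizer such as Adam you would also want to restore its state_dict, otherwise the running stats for m1 are lost):

# Recreate the optimizer so it only knows about m1's parameters.
optimizer = torch.optim.SGD(m1.parameters(), lr=0.01)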

Thanks, I will try them both.
Is there any way to check that it worked? For example, could I somehow count the number of parameters that were updated in a step, to compare the counts before and after freezing?

You could create copies of (some of) the parameters and compare them after calling the step operation.
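
A small check along those lines, assuming m1, m2, and the optimizer from above (the variable names are just illustrative):

import torch

# Snapshot the parameters before the update.
m1_before = [p.detach().clone() for p in m1.parameters()]
m2_before = [p.detach().clone() for p in m2.parameters()]

# ... run one forward/backward pass and optimizer.step() here ...

# Count how many parameter tensors actually changed.
m1_updated = sum(not torch.equal(b, p) for b, p in zip(m1_before, m1.parameters()))
m2_updated = sum(not torch.equal(b, p) for b, p in zip(m2_before, m2.parameters()))
print(f"m1 tensors updated: {m1_updated}, m2 tensors updated: {m2_updated}")

If the freezing worked, m2_updated should be 0 while m1_updated stays at its previous count.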