Add two optimizers in a single model for two different paths

I want to add two optimizers for my network; the network has two different paths, one for loss1 and another for loss2.
First I want to update loss1 with optimizer1, and after that update I want to update loss2 (which is a function of loss1) with optimizer2.

Please help me with how to do that. @ptrblck @smth

So I’m neither of the two, but if you allow me to give it a shot:

  1. there is no inherent need to define two optimizers for this, but nothing keeps you from doing so if you want.
  2. if you do this and have optimizers with state derived from gradient statistics (plain SGD doesn’t have such state; SGD with momentum, Adam, … do), each optimizer will only see the gradient statistics of the losses used with its step. This could be desired (because it implicitly equilibrates the loss gradients when used with adaptive optimizers) or undesired.
  3. The more common method is to make it a joint update, as sketched below.
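
For illustration, here is a minimal sketch of such a joint update, with a toy model and made-up losses standing in for your two paths (it only shows the pattern of summing the losses before a single backward/step, not your actual network):

import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))
opt = optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(8, 10)
target = torch.randn(8, 1)

opt.zero_grad()
out = model(x)
loss1 = nn.functional.mse_loss(out, target)  # loss of the first path
loss2 = loss1 ** 2                           # stand-in for a loss that is a function of loss1
(loss1 + loss2).backward()                   # single backward pass through both losses
opt.step()                                   # one joint update of all parameters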

Best regards

Thomas

P.S.: It is generally preferred not to tag people. While they invariably have the best answers, they don’t scale arbitrarily and it’s best to keep them as jokers for the hardest questions.

Thanks for replying, I will take your valuable suggestions.
model.parameters() will give all the parameters, but along with it I need one more optimizer which will update these parameters along with the new one.

opt1 = optim.Adam(model.parameters(), lr=1e-4)
opt2 = optim.Adam(other_loss, lr=1e-5)

Oh, you use the same parameters (or a list of filtered parameters).
You would get the separate gradients by

opt1.zero_grad()
loss1.backward(retain_graph=True)  # keep the graph, since other_loss is a function of loss1
opt1.step()
opt2.zero_grad()
other_loss.backward()
opt2.step()

so that opt2 is ignorant of loss1’s gradient and opt1 of other_loss's.
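
In case the two paths also have (partly) separate parameters, you could pass filtered parameter lists instead of model.parameters() to each optimizer. A small sketch with made-up submodule names (shared, head1, head2), just to show the grouping:

import torch.nn as nn
import torch.optim as optim

class TwoPathNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(10, 10)  # trunk used by both paths
        self.head1 = nn.Linear(10, 1)    # path for loss1
        self.head2 = nn.Linear(10, 1)    # path for other_loss
    # forward omitted, only the parameter grouping matters here

model = TwoPathNet()

# opt1 updates the shared trunk and the first head, opt2 only the second head
opt1 = optim.Adam(list(model.shared.parameters()) + list(model.head1.parameters()), lr=1e-4)
opt2 = optim.Adam(model.head2.parameters(), lr=1e-5)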

Best regards

Thomas

Thanks, it’s working.