I have a NN with this type of structure:
Here, L_est_A is the loss computed from A_init through the L_estimate function, and L_hat is the loss computed from random actions through L_estimate. L_estimate itself is a learned function.
In summary, the network simultaneously learns an approximation of another network's loss (L_estimate) and a conv_net that outputs actions minimizing that approximated loss, L_est_A.
My question is: I have separate optimizers for L_estimate and L_hat, and I want L_est_A.backward() to update only the weights of conv_net, and L_hat.backward() to update only the weights of L_estimate. But I haven't found a way to specify which optimizer to use for each loss. Any help is appreciated, thanks.
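For context, here is a minimal sketch of the pattern I'm describing (the module definitions and shapes are placeholders, not my actual architecture). The key idea is that an optimizer only ever updates the parameters passed to its constructor, so calling `.step()` on the right optimizer after the right `backward()` gives the behavior I want, as long as stale gradients on the other module are cleared first:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the two sub-networks described above.
conv_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
L_estimate = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

# Each optimizer only ever updates the parameters it was given.
opt_conv = torch.optim.Adam(conv_net.parameters(), lr=1e-3)
opt_est = torch.optim.Adam(L_estimate.parameters(), lr=1e-3)

x = torch.randn(32, 8)
random_actions = torch.randn(32, 4)
true_loss = torch.randn(32, 1)  # supervision target for L_estimate

# --- Step 1: train L_estimate on random actions (L_hat) ---
opt_est.zero_grad()
L_hat = F.mse_loss(L_estimate(random_actions), true_loss)
L_hat.backward()        # grads land only in L_estimate (conv_net is not in this graph)
opt_est.step()

# --- Step 2: train conv_net to minimize the estimated loss (L_est_A) ---
opt_conv.zero_grad()
opt_est.zero_grad()     # clear grads that backward() will also write into L_estimate
actions = conv_net(x)
L_est_A = L_estimate(actions).mean()
L_est_A.backward()      # grads flow into both nets...
opt_conv.step()         # ...but only conv_net's weights are updated
```

Note that in step 2, `backward()` still populates `.grad` on L_estimate's parameters; they are simply never applied because `opt_est.step()` is not called. An alternative is to freeze L_estimate during step 2 with `p.requires_grad_(False)` and re-enable it afterwards, which also avoids the wasted gradient computation.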