How to get/modify gradients of an encoder in a multi-output model with two losses?

Hi!

I am working on a project and decided to use PyTorch for the first time. However, I am facing a challenge in modifying the gradients of some parameters in the model. I have read some earlier posts that are somewhat related, but I still cannot get it to work… I hope you are able to help me out.

To give some context, suppose we have an encoder whose hidden representation is fed into two MLP classifiers (i.e. a multi-output model). Classifier F addresses a multi-class classification task and is trained to minimize loss_f. Classifier A, on the other hand, uses the same hidden representation from the encoder to tackle a binary classification problem by minimizing loss_a.
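To make the setup concrete, here is a minimal sketch of the architecture I have in mind (the dimensions, layer choices, and names like `clf_f` / `clf_a` are just placeholders):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)  # shared hidden representation

encoder = Encoder(in_dim=32, hidden_dim=64)
clf_f = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))  # multi-class head
clf_a = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))   # binary head
```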

In this setting, the parameters of both classifiers F and A are trained via standard backpropagation with .backward() and optim.step(), as sketched below.
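Continuing the sketch above, this is roughly what I do for the two heads (the loss functions, optimizer, and dummy data are placeholders; the encoder parameters are deliberately left out of this optimizer because their gradients will be replaced later):

```python
loss_f_fn = nn.CrossEntropyLoss()   # multi-class loss for classifier F
loss_a_fn = nn.BCEWithLogitsLoss()  # binary loss for classifier A
opt_heads = torch.optim.Adam(list(clf_f.parameters()) + list(clf_a.parameters()))

x = torch.randn(8, 32)                   # dummy batch
y_f = torch.randint(0, 10, (8,))         # multi-class targets
y_a = torch.randint(0, 2, (8,)).float()  # binary targets

h = encoder(x)
loss_f = loss_f_fn(clf_f(h), y_f)
loss_a = loss_a_fn(clf_a(h).squeeze(1), y_a)

opt_heads.zero_grad()
# Each head only depends on its own loss, so summing here still gives each
# head the correct gradient. retain_graph=True keeps the graph alive so the
# encoder gradients can be recomputed for the custom update afterwards.
(loss_f + loss_a).backward(retain_graph=True)
opt_heads.step()
```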

However, the parameters (W_enc) in the encoder must be updated by modifying the gradients as follows:

d_loss_f/d_W_enc - d_loss_a/d_W_enc - [some modified gradient]

where [some modified gradient] is an extra term, computed from the two gradients, that must be subtracted from d_loss_f/d_W_enc together with d_loss_a/d_W_enc. So my question is:

  • How can I implement this gradient modification to update the parameters in the encoder? (A sketch of what I mean follows below.)
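To show the direction I am thinking in, here is a sketch of the encoder update using torch.autograd.grad to get per-parameter gradients of each loss. `modify()` is a stub standing in for however [some modified gradient] is actually computed:

```python
def modify(g_f, g_a):
    # Placeholder: stands in for whatever produces [some modified gradient].
    return torch.zeros_like(g_f)

opt_enc = torch.optim.SGD(encoder.parameters(), lr=1e-3)
enc_params = list(encoder.parameters())

# Per-parameter gradients of each loss w.r.t. the encoder weights only.
grads_f = torch.autograd.grad(loss_f, enc_params, retain_graph=True)
grads_a = torch.autograd.grad(loss_a, enc_params, retain_graph=True)

for p, g_f, g_a in zip(enc_params, grads_f, grads_a):
    g_mod = modify(g_f, g_a)      # the extra term to subtract
    p.grad = g_f - g_a - g_mod    # the update rule from above
opt_enc.step()
```

Is this roughly the right approach, or is there a more idiomatic way to do it (e.g. with backward hooks)?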

I have read a few earlier posts on combining multiple losses, but in those cases the losses were simply added together, or all parameters were adjusted in the same way, which is not my goal.

I hope someone can help me with this or at least point me in the right direction. If more information or any clarification is needed, please let me know! Many thanks in advance.