How does autograd handle multiple objectives?

Using the pytorch framework.

Suppose you have 4 NN modules, 2 of which share weights. One objective relies on the computation of 3 of the modules (including the 2 that share weights), while the other objective relies on 2 modules, only 1 of which belongs to the weight-sharing pair; the remaining module is not used for the first objective.

What would the optimisation step entail in this scenario, with efficiency in mind?


4 NN modules of which 2 share weights

In this case, you really only have 3 NN modules, and one of them is simply reused in both computation paths.
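A minimal sketch of that setup (module names and sizes here are made up for illustration): one module instance plays the role of the weight-sharing pair by being applied twice in the first objective's path and once in the second's.

```python
import torch
import torch.nn as nn

# Hypothetical modules: "shared" stands in for the weight-sharing pair,
# i.e. a single module instance reused wherever those weights appear.
shared = nn.Linear(8, 8)  # used in both objectives' paths
a = nn.Linear(8, 8)       # used only by objective 1
b = nn.Linear(8, 8)       # used only by objective 2

x = torch.randn(4, 8)

# Objective 1's path touches 3 module applications, 2 of them "shared".
out1 = shared(a(shared(x)))
# Objective 2's path touches 2 module applications, 1 of them "shared".
out2 = b(shared(x))
```

Because `shared` is a single `nn.Module` instance, both paths read and backprop into the same parameters automatically.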

If you have multiple objectives that you want to backprop, you can use:
autograd.backward: http://pytorch.org/docs/autograd.html#torch.autograd.backward

You pass it the list of losses and their corresponding grads.

The optimization step is pretty standard: you give all of the modules' parameters to a single optimizer.
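Putting the two points together, a sketch of one training step might look like this (module names, shapes, and the squared-error losses are illustrative assumptions, not from the original question):

```python
import itertools
import torch
import torch.nn as nn

# "shared" is reused in both objectives' computation paths.
shared, a, b = nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 8)

# One optimizer over all parameters of all modules.
opt = torch.optim.SGD(
    itertools.chain(shared.parameters(), a.parameters(), b.parameters()),
    lr=0.1,
)

x = torch.randn(4, 8)
loss1 = shared(a(shared(x))).pow(2).mean()  # objective 1 (3 module applications)
loss2 = b(shared(x)).pow(2).mean()          # objective 2 (2 module applications)

opt.zero_grad()
# Summing the scalar losses backprops both objectives in one pass;
# torch.autograd.backward([loss1, loss2]) is an equivalent alternative.
(loss1 + loss2).backward()
opt.step()
```

The shared module's parameters receive gradient contributions from both losses in the single `backward()` call, so no special handling is needed.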

So just to be clear: specify a single objective that merges (e.g. sums) all the sub-objectives, and call backward() on it? There won't be any issue with going over the same variables twice through different pathways?

Thanks.

So just to be clear, specify a single objective that merges all the sub-objectives and backward() on it?

Yes.

There won’t be any issue regarding going over the same variables twice through different pathways?

No issues. Autograd accumulates the gradient contributions from every pathway into the shared parameters' .grad, which is exactly what the chain rule requires for reused weights.
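A quick check of that accumulation behaviour (a toy sketch with an assumed two-layer reuse of one module): backpropagating the summed objective gives the same gradient on the shared weights as backpropagating each objective separately and adding the results.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
shared = nn.Linear(4, 4)
x = torch.randn(2, 4)

# Two pathways through the same parameters.
loss1 = shared(shared(x)).sum()   # uses the shared weights twice
loss2 = shared(x).pow(2).sum()    # uses them once more

# Backward through both pathways at once...
(loss1 + loss2).backward()
g_joint = shared.weight.grad.clone()

# ...matches backpropagating each objective on its own and summing.
shared.weight.grad = None
shared(shared(x)).sum().backward()
g1 = shared.weight.grad.clone()

shared.weight.grad = None
shared(x).pow(2).sum().backward()
g2 = shared.weight.grad.clone()

assert torch.allclose(g_joint, g1 + g2)
```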