I want to construct a network architecture that will be trained with respect to two different losses.
One loss will be computed with respect to the output (y) of the network.
The other loss will be computed with respect to some variables (w) that are not the output.
I have ground-truth labels for both the y and the w variables.
I have a network (F) that has already been trained successfully. Its input is x and its output is w.
I can create a G network that instantiates the F network inside it.
The G network does not need any learnable parameters.
The G network will take the w outputs of the F network and compute the y values.
All of G's calculations can be implemented in its forward() function.
All of G's calculations are differentiable.
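To make this concrete, here is a minimal sketch of what I mean by G: a parameter-free nn.Module whose forward() applies a fixed differentiable computation to w. The formula inside forward() is only a placeholder for my actual computation.

```python
import torch
import torch.nn as nn

class G(nn.Module):
    """Parameter-free module: maps the w produced by F to y.

    It registers no learnable parameters; everything happens in forward(),
    using only differentiable torch operations.
    """
    def forward(self, w):
        # Placeholder for the real (differentiable) computation that turns w into y.
        y = torch.sum(w ** 2, dim=1, keepdim=True)
        return y
```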
If I build a network like G(F(x)) = y, is it possible to backpropagate both losses through it, so that the loss on y and the loss on w both update F's parameters?
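For reference, this is roughly the training setup I have in mind (F, G, loader, and the loss functions below are placeholders for my own components); my question is whether gradients from both losses would flow back into F this way:

```python
import torch
import torch.nn as nn

# Assumed to exist: F (the pretrained nn.Module), G (the parameter-free module above),
# and a DataLoader `loader` yielding (x, y_true, w_true) batches.
optimizer = torch.optim.Adam(F.parameters(), lr=1e-4)
criterion_y = nn.MSELoss()   # placeholder loss on the final output y
criterion_w = nn.MSELoss()   # placeholder loss on the intermediate variables w

for x, y_true, w_true in loader:
    optimizer.zero_grad()
    w = F(x)          # intermediate prediction from the trained network
    y = G(w)          # final prediction, computed without learnable parameters
    loss = criterion_y(y, y_true) + criterion_w(w, w_true)
    loss.backward()   # intent: gradients from both losses reach F's parameters
    optimizer.step()
```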
I think the answers below provide sufficient information.
That said, it would be very helpful in practice if PyTorch published more tutorials, especially ones covering non-vanilla models and training mechanisms.