Backpropagation from 2 different positions of a network

Hello

I want to construct a network architecture that will be trained with respect to 2 different losses.

One loss will be computed on the output (y) of the network.
The other loss will be computed on some intermediate variables (w) that are not the output.
I have ground-truth labels for both y and w.

I have a network (F) that is already trained successfully. Its input is x and its output is w.
I can create a G network that wraps an instance of the F network.
G does not need any learnable parameters.
G takes the w output of F and computes the y values.
All of G's calculations can be implemented in its forward() function.
All of G's calculations are differentiable.

If I build a composed network like G(F(x)) = y, is it possible to backpropagate both

  • From w to x
  • From y to x

Any code example would be very helpful.
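To make the question concrete, here is a rough sketch of the setup I have in mind (the layer sizes, the toy computation inside G, and the simple sum of the two losses are only placeholders, not my real model):

import torch
import torch.nn as nn

class F(nn.Module):                      # trained network: x -> w
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 5)
    def forward(self, x):
        return self.fc(x)

class G(nn.Module):                      # no learnable parameters: w -> y
    def forward(self, w):
        return (w ** 2).sum(dim=1, keepdim=True)   # any differentiable computation

f_net, g_net = F(), G()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(f_net.parameters(), lr=1e-3)

x = torch.randn(8, 10)
w_gt = torch.randn(8, 5)                 # ground-truth labels for w
y_gt = torch.randn(8, 1)                 # ground-truth labels for y

w = f_net(x)
y = g_net(w)
loss = criterion(w, w_gt) + criterion(y, y_gt)   # both losses contribute gradients

optimizer.zero_grad()
loss.backward()                          # gradients flow from both w and y back into F
optimizer.step()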

Thank you.

Hello

Let me ask another question.

# size_average and reduce are deprecated; reduction='mean' is sufficient
criterion = torch.nn.MSELoss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

outputs_1, outputs_2 = model(inputs_1, inputs_2)
loss_train = criterion(outputs_1, labels_1)   # the loss depends only on outputs_1
optimizer.zero_grad()
loss_train.backward()
optimizer.step()

In the code above, how does PyTorch know that loss_train.backward() computes gradients with respect to outputs_1 and not outputs_2?

outputs_1 is a tensor. Isn't it just a value, similar to a NumPy array?
Or does it also carry information about the model architecture that produced it?
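
For example, when I print the attributes of a tensor produced by a small toy model (just a quick check, not my actual model), I see something like this:

import torch

lin = torch.nn.Linear(3, 1)
out = lin(torch.randn(2, 3))

print(out.grad_fn)        # e.g. <AddmmBackward0 object at 0x...>
print(out.requires_grad)  # True

Is this grad_fn what backward() follows to trace the computation back through the model?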

Hi

I think the answers below have sufficient information.

On the other hand, if PyTorch published more tutorials, especially on non-vanilla learning models and training mechanisms, it would be beneficial in terms of practicality.