# How to calculate loss related to intermediate feed forward results

Hi,

I have the following problem: I have implemented a 7-layer neural network. My loss function has two parts. The first part is the cross-entropy loss for the classification problem, which is easy to implement and backpropagate.

However, the second part of my loss is computed from the output of the 6th layer, which is an intermediate feed-forward result.

May I ask how to compute the second part of the loss and combine it with the first part for backpropagation?

P.Y.

Sum the losses:

```python
loss = loss1 + loss2
loss.backward()
```

Use the output of the sixth layer directly. Don’t put it in an `nn.Sequential` if that’s what you’re doing:

```python
def forward(self, x):
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)
    x = self.layer5(x)
    l6 = self.layer6(x)      # keep the intermediate result
    l7 = self.layer7(l6)     # final output
    loss1 = my_loss_fn(l7)   # loss on the final output
    loss2 = my_loss_fn(l6)   # loss on the intermediate output
    return loss1 + loss2
```
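To make this concrete, here is a minimal, self-contained sketch of the same idea: a small model whose `forward` returns a combined loss built from both an intermediate activation and the final output, trained for one step. All names (`TinyNet`, the mean-squared losses) are illustrative stand-ins for your 7-layer network and your actual loss functions.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Two layers stand in for the 7-layer network; the principle is the same."""
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(4, 4)
        self.layer2 = nn.Linear(4, 4)

    def forward(self, x, target):
        h = self.layer1(x)                     # intermediate result (like l6)
        out = self.layer2(h)                   # final output (like l7)
        loss1 = ((out - target) ** 2).mean()   # loss on the final output
        loss2 = (h ** 2).mean()                # loss on the intermediate output
        return loss1 + loss2

model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, target = torch.randn(8, 4), torch.randn(8, 4)

loss = model(x, target)
opt.zero_grad()
loss.backward()   # gradients flow through both loss terms
opt.step()
```

Because `loss2` is computed inside `forward`, autograd tracks the intermediate tensor automatically; there is no need to recompute or detach anything.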

I think you can also backpropagate the two losses separately. After computing both losses, you can do

```python
loss1.backward(retain_graph=True)  # retain the graph so the second backward pass can run
loss2.backward()
```
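A quick sketch showing why this works: gradients accumulate into `.grad` across `backward()` calls, so two separate backward passes produce the same gradients as summing the losses first. The tensors here are illustrative.

```python
import torch

w = torch.randn(3, requires_grad=True)
x = torch.randn(3)

# Two losses sharing the intermediate result h
h = w * x
loss1 = h.sum()
loss2 = (h ** 2).sum()

loss1.backward(retain_graph=True)  # keep the graph alive for the second call
loss2.backward()                   # gradients accumulate into w.grad
grad_separate = w.grad.clone()

# Same thing via a single summed loss
w.grad = None
h = w * x
(h.sum() + (h ** 2).sum()).backward()
grad_summed = w.grad.clone()
```

`grad_separate` and `grad_summed` are equal, so either approach is fine; summing first is simply cheaper, since it needs only one backward pass and no retained graph.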

Hi, I tried `loss = loss1 + loss2; loss.backward()`. However, `loss1` doesn’t decrease after backprop while `loss2` does. Could you tell me how to solve this problem?