Backprop at different stages

I have two models, Model1 and Model2, stacked one on top of the other.

The input of Model1 is input1, and the input of Model2 is the output of Model1 concatenated with input1, as below:

output1 = Model1(input1)
input2 = torch.cat([output1, input1], dim=1)
output2 = Model2(input2)

Can I backprop twice in this stacked network, once at each stage, as below?

output1 = Model1(input1)
loss1 = cal_loss1(output1)
loss1.backward()
input2 = torch.cat([output1, input1], dim=1)
output2 = Model2(input2)
loss2 = cal_loss2(output2)
loss2.backward()

Thank you.

I think you may try loss1.backward(retain_graph=True) to allow backpropagating multiple times. Without retain_graph=True, the first backward() frees the intermediate buffers of Model1's graph, so loss2.backward(), which traverses Model1's graph again through output1, would raise a RuntimeError. Also note that Model1's parameters will then accumulate gradients from both losses.
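A minimal runnable sketch of that pattern, assuming toy Linear layers and MSE losses in place of your real models and loss functions:

import torch
import torch.nn as nn

# Placeholder models and dimensions; Model1/Model2 stand in for the
# original architectures, which are not shown in the question.
Model1 = nn.Linear(4, 3)
Model2 = nn.Linear(3 + 4, 2)

input1 = torch.randn(8, 4)
target1 = torch.randn(8, 3)  # hypothetical targets for the two losses
target2 = torch.randn(8, 2)

output1 = Model1(input1)
loss1 = nn.functional.mse_loss(output1, target1)
# retain_graph=True keeps Model1's intermediate buffers alive so that
# loss2.backward() can traverse Model1's graph again via output1.
loss1.backward(retain_graph=True)

input2 = torch.cat([output1, input1], dim=1)
output2 = Model2(input2)
loss2 = nn.functional.mse_loss(output2, target2)
loss2.backward()  # second pass through Model1's graph; no retain needed here

# Model1's parameters now hold gradients accumulated from both losses.
print(Model1.weight.grad.shape)

If you instead want loss2 to update only Model2, you can detach output1 before the concatenation (input2 = torch.cat([output1.detach(), input1], dim=1)), in which case the retain_graph=True is no longer needed.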