Multiple loss backward twice

Hello,

My loss function contains three parts: loss = p1*loss1 + p2*loss2 + p3*loss3.
I want to first get the gradient of the parameters with respect to loss1, loss2 and loss3 individually (for some purpose), and then update the network parameters by optimizing the total loss, i.e.,

loss1.backward()
loss1.l1.weight.grad  (l1 is a layer in my network)

loss2.backward()
loss2.l1.weight.grad

loss3.backward()
loss3.l1.weight.grad

No optimization is involved up to this point; I just want to know the gradient of each loss function. Then I want to update the parameters of the network by
opt.zero_grad()
loss.backward()
opt.step()

However, it gives me an error message asking me to use retain_graph=True when I call backward() on each individual loss.

There is no error if I use retain_graph=True, but is this the correct way to solve the problem? Is there a simpler way to get the gradient of each loss function and then update the parameters by optimizing the total loss? Thank you very much!

Hi @Zecheng_Zhang,

A better way to do this would be:

loss = loss1 + loss2 + loss3

But I don’t think this line, loss1.l1.weight.grad (and the similar ones after it), makes any sense, because the loss won’t have l1 as its attribute. The gradient is stored on the layer’s parameter itself, not on the loss tensor.
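If you do want the three gradients separately, one option is to ask autograd for them directly instead of calling backward() on each loss. A minimal sketch, with net, the data and the three loss terms as placeholders for whatever you actually have:

import torch

# Minimal sketch; net, the batch and the three losses are placeholders.
net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 1))
l1 = net[0]  # the layer whose gradient you want to inspect

x, y = torch.randn(16, 4), torch.randn(16, 1)
out = net(x)
loss1 = (out - y).pow(2).mean()
loss2 = out.abs().mean()
loss3 = out.pow(2).mean()

# Gradients belong to the parameters, not to the loss tensor.
# retain_graph=True is needed because all three losses share one forward graph.
g1, = torch.autograd.grad(loss1, l1.weight, retain_graph=True)
g2, = torch.autograd.grad(loss2, l1.weight, retain_graph=True)
g3, = torch.autograd.grad(loss3, l1.weight, retain_graph=True)

torch.autograd.grad returns the gradients instead of accumulating them into .grad, so there is nothing to zero between the calls.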

Hi, Usama,

The p1, p2 and p3 are the weights for the three losses. I need to update these weights every time I calculate the gradients of the three losses. Thanks!

I see, so you have a custom loss function defined with weights. In that case the approach above should still help.
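For reference, a rough sketch of the full loop under those assumptions (net, opt, the batch and the losses are placeholders, and p1, p2, p3 are plain floats you adjust however you like between steps):

import torch

net = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
p1, p2, p3 = 1.0, 1.0, 1.0  # loss weights, updated between steps as needed

for step in range(100):
    x, y = torch.randn(16, 4), torch.randn(16, 1)  # placeholder batch
    out = net(x)
    loss1 = (out - y).pow(2).mean()
    loss2 = out.abs().mean()
    loss3 = out.pow(2).mean()

    # Per-loss gradients w.r.t. all parameters, without touching .grad.
    # retain_graph=True keeps the shared graph alive for the later calls.
    params = list(net.parameters())
    g1 = torch.autograd.grad(loss1, params, retain_graph=True)
    g2 = torch.autograd.grad(loss2, params, retain_graph=True)
    g3 = torch.autograd.grad(loss3, params, retain_graph=True)

    # ... use g1, g2, g3 to update p1, p2, p3 here ...

    # Update the network on the weighted total loss; the final backward
    # is free to release the graph since nothing else needs it afterwards.
    opt.zero_grad()
    total = p1 * loss1 + p2 * loss2 + p3 * loss3
    total.backward()
    opt.step()

This avoids calling backward() on each individual loss, while keeping the retain_graph=True that the shared graph requires for every call except the last one.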