Call backward() twice

I just ran into exactly the same problem.

In my case, my two networks can each be trained independently with its own loss function. But training them together by calling loss1.backward() and loss2.backward() consecutively results in “inplace operation” errors, and the information PyTorch prints isn’t helpful at all.

I fixed the error by making sure the input tensor fed to the two networks is completely detached. As far as I understand, an input tensor that already carries gradient history is part of the computation graph, and by default loss.backward() frees the intermediate buffers of every part of the graph it traverses. That means if such an input tensor is shared by both networks, the shared part of the graph has already been freed by the time loss2.backward() runs, and the failure can show up as an “inplace operation” error in the debugging output.
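For illustration, here is a minimal sketch of the situation I mean. All the names (pre, net1, net2, raw, x) are made up; the point is only that x is produced by some earlier computation, so it carries a grad history shared by both losses:

import torch
import torch.nn as nn

pre = nn.Linear(4, 4)    # some upstream computation shared by both nets
net1 = nn.Linear(4, 1)
net2 = nn.Linear(4, 1)

raw = torch.randn(8, 4)
x = pre(raw)             # x is not a leaf: it is attached to pre's graph

loss1 = net1(x).sum()
loss2 = net2(x).sum()
loss1.backward()         # frees the buffers of the shared graph behind x
loss2.backward()         # fails: that shared part of the graph is already gone

The exact wording of the error depends on the version and on whether anomaly detection is enabled, but the cause is the shared, already-freed graph behind x.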

To detach the tensor, organize your code like this:

x = x.clone().detach()   # re-assign! clone+detach gives a copy with no grad history
loss1 = loss_func(y1, net1(x))
loss2 = loss_func(y2, net2(x))
opti1.zero_grad()
opti2.zero_grad()
loss1.backward()         # only touches net1's graph now
loss2.backward()         # net2's graph is independent, so this no longer fails
opti1.step()
opti2.step()

I’m using torch 1.13. Please let me know if this solution doesn’t work for you.