Hi,
I have a question about how to update overlapping parameters using different losses. For example:
hidden = encoder(imgs)
reconstructed = decoder(hidden)
prediction = classifier(hidden)
optimizer1 = Adam(encoder.parameters())
optimizer2 = Adam(decoder.parameters())
optimizer3 = Adam(classifier.parameters())
loss1 = Loss1(imgs, reconstructed)
loss2 = Loss2(prediction, labels)
In this case, I’d like to
- minimise loss1 and only update the parameters of encoder and decoder.
- minimise loss2 and only update the parameters of the encoder and classifier.
- maximise loss2 and only update the parameters of encoder.
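Since each of the three updates touches a different *combination* of modules, one way to express that is to build one optimizer per parameter group rather than one per module. A minimal sketch of that setup, using small `Linear` stand-ins for the real encoder/decoder/classifier (the module shapes here are just placeholders):

```python
import itertools
import torch

# Stand-in modules; substitute your real encoder/decoder/classifier.
encoder = torch.nn.Linear(4, 3)
decoder = torch.nn.Linear(3, 4)
classifier = torch.nn.Linear(3, 2)

# One optimizer per update rule, each holding exactly the parameters
# that rule is allowed to change.
opt_recon = torch.optim.Adam(
    itertools.chain(encoder.parameters(), decoder.parameters()))  # loss1
opt_cls = torch.optim.Adam(
    itertools.chain(encoder.parameters(), classifier.parameters()))  # min loss2
opt_adv = torch.optim.Adam(encoder.parameters())  # max loss2
```

Calling `opt_adv.step()` then only ever moves the encoder's weights, even if other modules happen to have gradients populated.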
I tried to call backward() for each loss separately, but I got the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [64, 32, 7, 7]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
I’m not sure whether this is the correct approach. Do I need to sum the losses and call backward() only once instead?
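For what it's worth, that error typically appears when an `optimizer.step()` modifies weights in place and a later `backward()` still depends on the pre-step values. One workaround is to re-run the forward pass before each backward/step, so every loss has its own fresh graph. A runnable sketch of one training iteration under that pattern, with stand-in modules and dummy data (the module shapes, loss choices, and optimizer groupings are all assumptions for illustration):

```python
import torch

torch.manual_seed(0)
# Stand-ins; replace with your real encoder/decoder/classifier and data.
encoder = torch.nn.Linear(4, 3)
decoder = torch.nn.Linear(3, 4)
classifier = torch.nn.Linear(3, 2)
imgs = torch.randn(8, 4)
labels = torch.randint(0, 2, (8,))

opt_recon = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
opt_cls = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()))
opt_adv = torch.optim.Adam(encoder.parameters())

recon_crit = torch.nn.MSELoss()
cls_crit = torch.nn.CrossEntropyLoss()

# 1) minimise loss1: update encoder + decoder
opt_recon.zero_grad()
loss1 = recon_crit(decoder(encoder(imgs)), imgs)
loss1.backward()
opt_recon.step()

# 2) minimise loss2: update encoder + classifier.
#    Fresh forward pass, because step() above changed the
#    encoder's weights in place and invalidated the old graph.
opt_cls.zero_grad()
loss2 = cls_crit(classifier(encoder(imgs)), labels)
loss2.backward()
opt_cls.step()

# 3) maximise loss2: update encoder only -- negate the loss and step
#    an optimizer that holds just the encoder's parameters.
opt_adv.zero_grad()
classifier.zero_grad()  # classifier still receives grads here; discard them
loss2_adv = cls_crit(classifier(encoder(imgs)), labels)
(-loss2_adv).backward()
opt_adv.step()
```

The alternative of summing the losses and calling `backward()` once also avoids the error, but then a single backward pass cannot give `loss2` opposite signs for the encoder and the classifier, so the per-group updates above seem closer to what is described.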
Thanks in advance for any help!