Hello there, complete PyTorch beginner here (started 5 months ago).
I got the following task:
I have to build an autoencoder that has 3 different parts. 2 of those are encoders that are fed the same image. The third part is the decoder.
Now I am really confused about summing up the losses, since I have 3 losses and 2 of them should each only act on one of the encoder sections.
I use the following rough way of doing this:
import torch.nn.functional as F

def lossFunction_3(iterationIdx, out1_t, out2_t, out1_prev, out2_prev, reconst, image, optimizer):
    # store plain floats for logging only, so no graph is kept around
    subLoss = [0, 0, 0]
    reconstLoss = F.mse_loss(reconst, image)
    subLoss[0] = reconstLoss.item()
    outLoss1 = F.l1_loss(out1_t, out1_prev)
    subLoss[1] = outLoss1.item()
    outLoss2 = F.l1_loss(out2_t, out2_prev)
    subLoss[2] = outLoss2.item()
    # alternate which encoder loss is applied
    if iterationIdx % 2 == 0:
        (reconstLoss + outLoss1).backward()
    else:
        (reconstLoss + outLoss2).backward()
    optimizer.step()
    optimizer.zero_grad()
    return sum(subLoss), subLoss
def trainLoop(iterationIdx, out1_prev, out2_prev):
    out1, out2, reconstruction = net.run()
    loss, subLoss = lossFunction_3(iterationIdx, out1, out2, out1_prev, out2_prev, reconstruction, image, optimizer)
So my question is:
Does it make any difference if I use (reconstLoss + outLoss1.item()).backward()? Does autograd still remember where out1 and out2 came from and update only one of the 2 encoder parts? Or does the value just get added to the reconstruction loss and propagated through all 3 parts?
I want each outLoss to be applied to only one of the 2 encoder parts (depending on iterationIdx), while the reconstruction loss updates all 3 parts.
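Here is a minimal repro of what I mean, stripped down to two dummy linear "encoders" (not my real model, just stand-in names) to check how .item() interacts with the graph:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# two dummy "encoders" standing in for my real encoder sections
enc1 = nn.Linear(4, 4)
enc2 = nn.Linear(4, 4)

x = torch.randn(2, 4)
out1 = enc1(x)
out2 = enc2(x)

loss1 = out1.pow(2).mean()
loss2 = out2.pow(2).mean()

# loss1 stays a tensor, but loss2 goes through .item(), which
# returns a plain Python float with no autograd history
total = loss1 + loss2.item()
total.backward()

# only enc1 receives gradients; .item() cut enc2 out of the graph
print(enc1.weight.grad is not None)  # True
print(enc2.weight.grad is None)      # True
```

So in this toy version it looks like .item() detaches the term entirely, but I'm not sure if that matches what happens in my full setup.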
(using PyTorch 0.4.1)