Hi,
Yes, that would work.
By the way, a more efficient way is to combine both losses into a single weighted sum and call backward() once:
criterion = nn.L1Loss()  # nn.L1Loss is a module; instantiate it before calling
loss1 = criterion(B, A)
loss2 = criterion(C, A)
final_loss = loss1 + 0.5 * loss2
# Don't forget the optimizer.zero_grad() somewhere before the backward
final_loss.backward()
optimizer.step()
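For completeness, here is a minimal runnable version of the whole pattern. The model, tensor shapes, learning rate, and the way B and C are produced are placeholders for illustration, not part of your setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(4, 4)  # stand-in for whatever network produces B and C
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.L1Loss()

x = torch.randn(8, 4)
A = torch.randn(8, 4)    # target

B = model(x)             # first prediction (placeholder)
C = model(x * 0.5)       # second prediction (placeholder)

loss1 = criterion(B, A)
loss2 = criterion(C, A)
final_loss = loss1 + 0.5 * loss2  # single weighted scalar loss

optimizer.zero_grad()    # clear stale gradients before the backward
final_loss.backward()    # one backward pass through the summed loss
optimizer.step()
```

Summing the losses before backward() means autograd traverses the graph once, which is cheaper than calling backward() on each loss separately (and avoids needing retain_graph=True).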