How to train a domain adaptive model

criterion = nn.CrossEntropyLoss()

output_source = model(source_image)
source_labels = torch.zeros(output_source.size(0), dtype=torch.long)  # domain label 0
loss_s = criterion(output_source, source_labels)
loss_s.backward()
output_target = model(target_image)
target_labels = torch.ones(output_target.size(0), dtype=torch.long)   # domain label 1
loss_t = criterion(output_target, target_labels)
loss_t.backward()

or

output_source = model(source_image)
loss_s = criterion(output_source, source_labels)  # same criterion and domain labels as above
output_target = model(target_image)
loss_t = criterion(output_target, target_labels)
loss = loss_s + loss_t
loss.backward()

Which one is the right approach?
Thank you in advance!

Both approaches work and accumulate the same gradients.
The first one uses less memory but more compute: you call backward twice (more compute), but after the first backward call Autograd can free the intermediate tensors from the first forward pass, since they are no longer needed once the gradients have been computed from them.
The latter approach keeps both computation graphs alive until the single backward call and thus uses more memory.
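You can verify the equivalence directly: since gradients accumulate in `.grad` across backward calls, two separate backward passes and one backward pass on the summed loss produce the same result. A minimal sketch, using a toy linear model and random inputs as stand-ins (the model, batch size, and feature size here are assumptions, not from the original post):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)            # toy stand-in for the domain classifier
criterion = nn.CrossEntropyLoss()

source_image = torch.randn(3, 4)
target_image = torch.randn(3, 4)
source_labels = torch.zeros(3, dtype=torch.long)  # domain label 0
target_labels = torch.ones(3, dtype=torch.long)   # domain label 1

# Approach 1: two separate backward calls; gradients accumulate in .grad
model.zero_grad()
criterion(model(source_image), source_labels).backward()
criterion(model(target_image), target_labels).backward()
grads_1 = [p.grad.clone() for p in model.parameters()]

# Approach 2: sum both losses, then a single backward call
model.zero_grad()
loss = (criterion(model(source_image), source_labels)
        + criterion(model(target_image), target_labels))
loss.backward()
grads_2 = [p.grad.clone() for p in model.parameters()]

# Both approaches yield (numerically) identical accumulated gradients
same = all(torch.allclose(g1, g2) for g1, g2 in zip(grads_1, grads_2))
print(same)
```

The only difference is the memory/compute trade-off described above, so pick whichever fits your GPU budget.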
