What does grad_fn = DivBackward0 represent?
I have two losses, one of which prints a grad_fn and one of which doesn't:
L_c -> tensor(0.2337, device='cuda:0', dtype=torch.float64)
L_d -> tensor(1.8348, device='cuda:0', grad_fn=<DivBackward0>)
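
For reference, here is a standalone sketch of how I understand the two kinds of tensors arise (the values mirror mine above, but this is illustrative code, not my actual training code):

import torch

# A tensor created directly from data is a leaf with no recorded history,
# so its grad_fn is None, like my L_c above.
L_c = torch.tensor(0.2337, dtype=torch.float64)
print(L_c.grad_fn)  # None

# A tensor produced by an operation on a requires_grad tensor records the
# backward function of that operation; division records DivBackward0,
# like my L_d above.
w = torch.tensor(3.6696, requires_grad=True)
L_d = w / 2.0
print(L_d.grad_fn)  # <DivBackward0 object at 0x...>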
I want to combine them as:
L = L_d + 0.5 * L_c
optimizer.zero_grad()
L.backward()
optimizer.step()
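
If it helps, these are the checks I can run on the combined loss before calling backward (using the tensors above; the commented values are what I expect to see, not verified output):

print(L.grad_fn)          # <AddBackward0 object at 0x...>, recorded via L_d
print(L_c.requires_grad)  # False, so L_c presumably acts as a constant in the sum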
Does the fact that one loss has a grad_fn (DivBackward0) and the other doesn't cause an issue during backprop?