grad_fn=<DivBackward0>

What does grad_fn = DivBackward0 represent?
I have two losses:

L_c -> tensor(0.2337, device='cuda:0', dtype=torch.float64)

L_d -> tensor(1.8348, device='cuda:0', grad_fn=<DivBackward0>)

I want to combine them as:

L = L_d + 0.5 * L_c
optimizer.zero_grad()
L.backward()
optimizer.step()

Does the fact that one has a DivBackward0 grad_fn and the other doesn't cause an issue in backprop?

grad_fn=<DivBackward0> means the tensor is the output of a division that autograd recorded, so gradients can flow backwards through it. L_c, by contrast, has no grad_fn, so it is not connected to any computation graph and behaves like a constant in L. Calling L.backward() therefore only propagates gradients through L_d, i.e., optimizer.step() would try to minimize only L_d.
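A minimal, self-contained sketch (using hypothetical tensors, not the original model) illustrating the point: only the term that carries a grad_fn contributes to the gradient, so the constant L_c has no effect on the update.

import torch

# Hypothetical parameter and optimizer, just to show where gradients land.
w = torch.tensor(1.0, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

pred = w * 3.0                         # recorded by autograd
L_d = (pred - 1.0) ** 2 / 2.0          # has a grad_fn (ends in a division)
L_c = torch.tensor(0.2337, dtype=torch.float64)  # plain constant, no grad_fn

print(L_d.grad_fn)   # <DivBackward0 object at ...>
print(L_c.grad_fn)   # None -> treated as a constant in L

L = L_d + 0.5 * L_c
optimizer.zero_grad()
L.backward()
print(w.grad)        # gradient comes only from the L_d term

# If L_c is supposed to be trained as well, it must be computed from tensors
# that require grad, using torch ops only (no .item(), .detach(), or .numpy()
# anywhere along the way), so that it keeps its own grad_fn.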