Problem with error backpropagation

Although the program runs without errors, the result is not the intended one. I have been stuck on this for the past two days. Any help would be greatly appreciated.

        optimizer_G.zero_grad()
        for res_idx in range(2):
            a = someloss
            b = someloss
            c = someloss
            d = someloss
            
            if res_idx == 0:  # "is 0" checks identity, not value; use == here
                loss_G = a + b + c + d
                
            else:
                loss_G += a + b + c + d
        loss_G.backward()
        optimizer_G.step()

The losses a, b, c, d change as the loop continues. Is this the correct way to define the loss (since the loss variables are overwritten) and to update the parameters?

Could you explain the issue a bit more?
If you are recalculating the a, b, c, d losses via someloss in each iteration, it’s expected that they will change.
How is someloss defined and what would be the expected behavior?
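In general, accumulating several loss terms across loop iterations and calling backward() once is a valid pattern: autograd keeps the computation graph of every term alive until backward() runs, and the gradients of the sum are the sum of the gradients. Here is a minimal sketch of that pattern with a stand-in model and loss (the Linear model, inputs, and mse_loss are assumptions, since someloss is not shown); initializing loss_G to zero also removes the res_idx branch:

```python
import torch

# Hypothetical stand-ins for the model and data from the original snippet
model = torch.nn.Linear(4, 1)
optimizer_G = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 4)
target = torch.randn(8, 1)

optimizer_G.zero_grad()
loss_G = torch.zeros(())              # start from 0 instead of branching on res_idx
for res_idx in range(2):
    pred = model(x)
    # stand-in for a + b + c + d; each iteration's graph stays alive
    loss_G = loss_G + torch.nn.functional.mse_loss(pred, target)
loss_G.backward()                     # gradients of the accumulated sum
optimizer_G.step()
```

The earlier loss variables being "overwritten" is not a problem: reassigning the Python names a, b, c, d does not free the graphs already folded into loss_G.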