Backpropagating losses

I have written code with three losses: a segmentation loss, an adversarial loss, and a discriminator loss. The total loss looks like this:

L_total = L_seg + λ1 * L_adv + λ2 * L_disc

In code, the final loss is computed as:

        output = model(x)

        bce = nn.BCELoss().cuda(GPU)

        ## Pixel-wise loss
        s_loss = bce(output, Variable(ground_truth).cuda(GPU))

        ## Measures the segmenter's ability to fool the discriminator
        adv_loss = bce(critic(fake_img), valid)

        ###### Train Discriminator ######

        ## Measures the discriminator's ability to classify real from generated images
        real_loss = bce(critic(real_img), valid)
        fake_loss = bce(critic(fake_img), fake)
        d_loss = (real_loss + fake_loss) / 2

        loss = s_loss + lambda1 * adv_loss + lambda2 * d_loss

        # training
        model.zero_grad()
        discriminator.zero_grad()

        loss.backward()

I just wanted to know whether this is the correct way of combining the three losses.

P.S.: This is essentially a modified DCGAN with a segmentation network in place of the generator.

Hello,
The implementation looks correct to me given the provided code.
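For reference, here is a minimal, self-contained sketch of the combined-loss pattern above. The networks, tensor shapes, and loss weights are toy stand-ins chosen only to make the snippet runnable, not the original architecture. One common convention worth noting: the discriminator's fake-side term is often computed on `fake_img.detach()`, so that `d_loss` does not backpropagate into the segmenter.

```python
import torch
import torch.nn as nn

# Toy stand-ins (hypothetical shapes/architectures, for illustration only)
model = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Sigmoid())    # segmenter
critic = nn.Sequential(nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

bce = nn.BCELoss()
lambda1, lambda2 = 0.1, 0.5  # assumed loss weights

x = torch.randn(2, 1, 4, 4)          # input images
ground_truth = torch.rand(2, 1, 4, 4)  # segmentation targets in [0, 1]
real_img = torch.rand(2, 1, 4, 4)    # real examples for the discriminator
valid = torch.ones(2, 1)             # "real" labels
fake = torch.zeros(2, 1)             # "fake" labels

fake_img = model(x)                      # segmenter output plays the generator role
s_loss = bce(fake_img, ground_truth)     # pixel-wise segmentation loss
adv_loss = bce(critic(fake_img), valid)  # segmenter tries to fool the critic

# Detach so the discriminator loss does not update the segmenter
real_loss = bce(critic(real_img), valid)
fake_loss = bce(critic(fake_img.detach()), fake)
d_loss = (real_loss + fake_loss) / 2

# Weighted sum of all three losses, single backward pass
loss = s_loss + lambda1 * adv_loss + lambda2 * d_loss
model.zero_grad()
critic.zero_grad()
loss.backward()
```

After `loss.backward()`, the segmenter receives gradients from `s_loss` and `adv_loss`, while the critic receives gradients from `adv_loss` and `d_loss`; each network's optimizer can then be stepped as usual.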