Why "loss.backward()" didn't update parameters' gradient?

Hi,

I am trying to add a second, different loss function to the original one, but the weights never change in response to it. Swapping the second loss for other functions makes no difference either. Is there anything wrong here? I am running the code on a GPU. The first loss is nn.BCELoss() and the second is an L1-style term. The results are exactly the same as with nn.BCELoss() alone; the L1 term (or any other loss I add) has no effect on the results.
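To confirm that the pattern itself (summing two scalar losses and calling `backward()` once) is sound, here is a tiny self-contained toy example, not my actual networks:

```
import torch
import torch.nn as nn

# Toy illustration of the pattern: two scalar losses summed,
# one backward() call accumulating gradients from both terms.
torch.manual_seed(0)
model = nn.Linear(4, 1)
x = torch.randn(8, 4)
target = torch.rand(8, 1)

out = torch.sigmoid(model(x))
loss = nn.BCELoss()(out, target) + nn.L1Loss()(out, target)

model.zero_grad()
loss.backward()
print(model.weight.grad)  # non-zero: both terms contribute gradients
```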

Here is the relevant part of my training loop:

```
label.fill_(real_label)
label = label.to(device)
# Score the generator's fake batch with the discriminator
output = netD(fake).view(-1)

# Calculate G's adversarial loss based on this output
errG1 = criterion(output, label)

# Normalized 100-bin histogram of the Gaussian mask values (on CPU)
xxx = torch.histc(GaussyMask.squeeze(1).view(-1).cpu(), bins=100, min=0, max=1)
ddGaussy = xxx / xxx.sum()

# Normalized 100-bin histogram of the fake image values (on CPU)
xxx1 = torch.histc(fake.squeeze(1).view(-1).cpu(), bins=100, min=0, max=1)
ddFake = xxx1 / xxx1.sum()

# Second loss: L1 distance between the two normalized histograms
MSECMBSS = (ddGaussy - ddFake).abs().sum()

# Calculate gradients for G from the sum of the two losses
errG = errG1 + MSECMBSS
errG.backward()
D_G_z2 = output.mean().item()
D_G_z22 += D_G_z2
# Update G
optimizerG.step()
```
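
In case it matters, here is a minimal check (a sketch, assuming the variables from the loop above are in scope and `optimizerG` holds `netG`'s parameters) that should reveal whether the second term is actually connected to the autograd graph:

```
# A loss term with grad_fn=None cannot affect the weights when
# errG.backward() is called, so inspect both terms first.
print(errG1.requires_grad, errG1.grad_fn)
print(MSECMBSS.requires_grad, MSECMBSS.grad_fn)

# Compare a generator parameter's gradient with and without the extra term.
optimizerG.zero_grad()
errG1.backward(retain_graph=True)
g1 = next(netG.parameters()).grad.clone()

optimizerG.zero_grad()
(errG1 + MSECMBSS).backward()
g2 = next(netG.parameters()).grad

print(torch.equal(g1, g2))  # True means the second loss adds nothing
```

Is this the right way to diagnose it, or is something in how I compute `MSECMBSS` blocking the gradients?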