Is summing the errors and then calling backward() the same as calling backward() for each error and then summing?

This is code from the PyTorch DCGAN example:

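    # train with real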
    output = netD(inputv)
    errD_real = criterion(output, labelv)
    errD_real.backward()
    D_x = output.data.mean()

    # train with fake
    noise.resize_(batch_size, nz, 1, 1).normal_(0, 1)
    noisev = Variable(noise)
    fake = netG(noisev)
    labelv = Variable(label.fill_(fake_label))
    output = netD(fake.detach())  # detach() so no gradients flow back into netG here
    errD_fake = criterion(output, labelv)
    errD_fake.backward()
    D_G_z1 = output.data.mean()
    errD = errD_real + errD_fake
    optimizerD.step()

Can this be changed like this?

    err = errD_real + errD_fake
    err.backward()

I mean: is summing the errors and then calling backward() once the same as calling backward() on each error separately and then summing them?

Hi,

Yes, it is the same. Gradients accumulate in each parameter's .grad buffer, so calling backward() on each loss separately leaves the same accumulated gradients as a single backward() on the summed loss.
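Here is a minimal sketch (a toy linear discriminator with made-up shapes, written with the current tensor API rather than Variable) that checks both strategies leave identical gradients in .grad:

    import torch
    import torch.nn as nn

    # Toy discriminator and data, just to compare the two backward() strategies.
    netD = nn.Linear(4, 1)
    criterion = nn.BCEWithLogitsLoss()
    real = torch.randn(8, 4)
    fake = torch.randn(8, 4)
    real_label = torch.ones(8, 1)
    fake_label = torch.zeros(8, 1)

    # Strategy A: one backward() per loss; gradients accumulate in .grad.
    netD.zero_grad()
    criterion(netD(real), real_label).backward()
    criterion(netD(fake), fake_label).backward()
    grads_a = [p.grad.clone() for p in netD.parameters()]

    # Strategy B: sum the losses, then a single backward().
    netD.zero_grad()
    err = criterion(netD(real), real_label) + criterion(netD(fake), fake_label)
    err.backward()
    grads_b = [p.grad.clone() for p in netD.parameters()]

    # Both leave the same accumulated gradients (up to floating-point noise).
    print(all(torch.allclose(a, b) for a, b in zip(grads_a, grads_b)))  # True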

Is there any difference? I think there might be a speed difference.

Running a single .backward() on the summed loss is going to be faster if you work with small graphs, since the backward pass is traversed only once. If you work with full CNNs, I don't think you will see any significant difference.
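A rough timing sketch (hypothetical sizes and iteration count, CPU only) on a deliberately tiny graph where the autograd overhead dominates the actual math:

    import time
    import torch
    import torch.nn as nn

    net = nn.Linear(16, 1)
    mse = nn.MSELoss()
    x1, x2 = torch.randn(64, 16), torch.randn(64, 16)
    t1, t2 = torch.randn(64, 1), torch.randn(64, 1)

    def run(separate, iters=1000):
        # Time many iterations of forward + backward with either strategy.
        start = time.perf_counter()
        for _ in range(iters):
            net.zero_grad()
            la, lb = mse(net(x1), t1), mse(net(x2), t2)
            if separate:
                la.backward()
                lb.backward()
            else:
                (la + lb).backward()
        return time.perf_counter() - start

    print("two backward() calls:", run(True))
    print("one backward() call :", run(False))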
