Could someone please share their WGAN-GP loss?

I can’t make sense of my network’s loss: the D loss is sometimes positive and decreasing slowly, then it suddenly turns negative, the gradient goes to zero, and the D network can no longer discriminate between real and fake samples.

Could someone please share their network’s loss, or at least explain how it should behave in the normal case?

Does this help?

Thanks, but this repository doesn’t include the loss.

This Wasserstein notebook has a variety of different losses, so the relevant loss is selected by if-blocks.
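
For reference, the gradient-penalty branch of the critic loss boils down to something like the sketch below. The names (netD, real, fake, gp_weight) are placeholders, not the notebook’s literal code:

import torch
from torch.autograd import Variable

def wgan_critic_loss(netD, real, fake, use_grad_penalty=True, gp_weight=10.0):
    # Wasserstein estimate: the critic should score real samples higher than fakes.
    loss = netD(fake).mean() - netD(real).mean()
    if use_grad_penalty:
        # Random points on the lines between real and fake samples.
        eps = real.data.new(real.size(0), 1, 1, 1).uniform_().expand(real.size())
        interp = Variable(eps * real.data + (1 - eps) * fake.data, requires_grad=True)
        grad, = torch.autograd.grad(netD(interp).sum(), interp, create_graph=True)
        grad_norm = ((grad ** 2).view(grad.size(0), -1).sum(1) + 1e-12) ** 0.5
        # Two-sided penalty pushing the gradient norm towards 1.
        loss = loss + gp_weight * ((grad_norm - 1.0) ** 2).mean()
    return loss

The if-blocks in the notebook just switch between variants of this (e.g. one-sided vs. two-sided penalty).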

Best regards

Thomas

@tom I’m testing your code, but this is the first time my network has run into NaN.

I ported your GP loss like this; it might be a mistake I’m making.

import torch
from torch.autograd import Variable

# minibatch_size, Cuda, GpuId and Lambda are globals defined elsewhere in my script.
def calc_gradient_penalty(netD, real_data, fake_data):
    onesided = True

    # One-sided penalty only punishes gradient norms above 1; clamp keeps the negative part of (1 - norm).
    if onesided:
        clip_fn = lambda x: x.clamp(max=0)
    else:
        clip_fn = lambda x: x

    # One random interpolation coefficient per sample, expanded to the image shape.
    alpha = torch.FloatTensor(minibatch_size, 1)
    alpha.uniform_()
    alpha = alpha.expand(minibatch_size, int(real_data.nelement() / minibatch_size))
    alpha = alpha.contiguous().view(minibatch_size, 1, 240, 320)
    alpha = alpha.cuda(GpuId) if Cuda else alpha

    # Points on the line between real and fake samples; requires_grad is needed so the
    # critic output can be differentiated w.r.t. them (on the CPU as well, not only with Cuda).
    interp_points = alpha * real_data.data + (1 - alpha) * fake_data.data
    interp_points = Variable(interp_points, requires_grad=True)

    errD_interp_vec = netD(interp_points)
    errD_gradient, = torch.autograd.grad(errD_interp_vec.sum(), interp_points, create_graph=True)

    # Per-sample L2 norm of the gradient; the **0.5 is the square root that was missing in an
    # earlier version, and the small epsilon avoids a NaN gradient from sqrt(0).
    lip_est = ((errD_gradient ** 2).view(minibatch_size, -1).sum(1) + 1e-12) ** 0.5
    lip_loss = Lambda * (clip_fn(1.0 - lip_est) ** 2).mean(0).view(1)
    print('G ', lip_loss.data[0])
    return lip_loss
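
For context, the penalty goes into the critic update roughly like this (a sketch with placeholder names; optimizerD, netG and noise stand in for the rest of the training loop):

# One critic update step (sketch; real_data comes from the data loader).
netD.zero_grad()

errD_real = netD(real_data).mean()
fake_data = netG(noise).detach()   # keep the generator fixed during the D step
errD_fake = netD(fake_data).mean()

gp = calc_gradient_penalty(netD, real_data, fake_data)
errD = errD_fake - errD_real + gp  # Wasserstein estimate plus gradient penalty
errD.backward()
optimizerD.step()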