falmasri
(Falmasri)
February 26, 2018, 8:01am
1
I can’t understand my network loss. The D loss is sometimes positive and decreasing slowly, then it suddenly becomes negative, the gradient goes to zero, and the D network can’t discriminate between samples anymore.
Could someone please share their network loss, or at least explain how it should behave in normal training?
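For context on why a negative D loss is not by itself a problem: in WGAN-style training the critic loss is E[D(fake)] − E[D(real)] (plus a penalty term), and the negative of that quantity estimates the Wasserstein distance, so the loss is expected to be negative once the critic scores real samples higher than fakes. A minimal pure-Python sketch (the function name is mine, for illustration):

```python
# Hedged sketch: WGAN critic loss = mean(D(fake)) - mean(D(real)).
# A negative value is normal and means the critic separates the batches;
# minus the loss estimates the Wasserstein distance between them.
def critic_loss(d_real, d_fake):
    """d_real, d_fake: critic scores for real and fake samples."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(d_fake) - mean(d_real)

# When real samples score higher than fakes, the loss is negative:
print(critic_loss([1.0, 2.0], [0.0, 0.5]))  # -> -1.25
```

What is suspicious in the behavior described above is not the sign but the gradient collapsing to zero afterwards.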
falmasri
(Falmasri)
February 26, 2018, 10:16am
3
Thanks, but this repository doesn’t include the loss.
tom
(Thomas V)
February 26, 2018, 10:25am
4
This Wasserstein notebook has a variety of different losses, so the active loss is selected by if-blocks.
Best regards
Thomas
falmasri
(Falmasri)
February 26, 2018, 2:37pm
5
@tom I’m testing your code, but this is the first time my network has run into NaN.
I adapted your GP loss like this; it might be a mistake on my side.
def calc_gradient_penalty(netD, real_data, fake_data):
    # One-sided penalty: only gradient norms above 1 are penalized.
    onesided = True
    if onesided:
        clip_fn = lambda x: x.clamp(max=0)
    else:
        clip_fn = lambda x: x
    # Random interpolation coefficient per sample, broadcast to image shape.
    alpha = torch.FloatTensor(minibatch_size, 1)
    alpha.uniform_()
    alpha = alpha.expand(minibatch_size, int(real_data.nelement() / minibatch_size)) \
                 .contiguous().view(minibatch_size, 1, 240, 320)
    alpha = alpha.cuda(GpuId) if Cuda else alpha
    # Interpolate between real and fake samples.
    interp_points = alpha * real_data.data + (1 - alpha) * fake_data.data
    # requires_grad must be set unconditionally (not only when Cuda is True),
    # otherwise autograd.grad below fails on CPU.
    interp_points = Variable(interp_points, requires_grad=True)
    errD_interp_vec = netD(interp_points)
    errD_gradient, = torch.autograd.grad(errD_interp_vec.sum(), interp_points,
                                         create_graph=True)
    # Per-sample gradient norm. The small epsilon keeps the sqrt away from
    # zero, whose gradient is NaN -- a common source of NaNs in WGAN-GP.
    lip_est = ((errD_gradient ** 2).view(minibatch_size, -1).sum(1) + 1e-12) ** 0.5
    lip_loss = Lambda * (clip_fn(1.0 - lip_est) ** 2).mean(0).view(1)
    print('G ', lip_loss.data[0])
    return lip_loss
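To see what the one-sided clamp in the function above does to each sample, here is a pure-Python sketch of just the penalty term (the function name and the default Lambda=10 are my assumptions, not from the code above):

```python
# Hedged sketch of the one-sided gradient penalty: for each per-sample
# gradient norm g, the term is clip(1 - g, max=0) ** 2, so norms at or
# below 1 contribute nothing and only norms above 1 are penalized.
def one_sided_penalty(grad_norms, Lambda=10.0, onesided=True):
    clip = (lambda x: min(x, 0.0)) if onesided else (lambda x: x)
    terms = [clip(1.0 - g) ** 2 for g in grad_norms]
    return Lambda * sum(terms) / len(terms)

# Norms 0.5 and 1.0 contribute 0; norm 2.0 contributes (1 - 2)^2 = 1,
# so the mean penalty is 10 * 1/3.
print(one_sided_penalty([0.5, 1.0, 2.0]))
```

With `onesided=False` this reduces to the standard two-sided WGAN-GP term, which also penalizes norms below 1 (here 0.5 would contribute 0.25).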