NaN in torch.norm if input is zero

Hello!

This code prints an array of NaNs :cry: :

import torch
from torch.autograd import Variable

a = Variable(torch.zeros(3, 3), requires_grad=True)
b = a.norm()
b.backward()
print(a.grad)

Have I done anything wrong? It looks rather like a formula bug …

I have found a similar issue here; it may be related …

The problem is that you are trying to take the derivative of the square root function at 0, which is +infinity. Since the norm is sqrt(sum(a_i^2)), the chain rule multiplies that +infinity by the gradient of the inner sum of squares, which is 0 at 0, so the gradient of a is +infinity * 0 = NaN.
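
For instance, writing the norm out by hand makes the inf * 0 explicit (a minimal sketch, using the same Variable API as the snippet above):

import torch
from torch.autograd import Variable

# b = sqrt(u) with u = sum(a_i^2). db/du at u=0 is +inf, and
# du/da_i at a_i=0 is 0, so the chain rule gives +inf * 0 = NaN.
a = Variable(torch.zeros(3, 3), requires_grad=True)
b = a.pow(2).sum().sqrt()
b.backward()
print(a.grad)  # all NaN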

Ok, got it.

I guess the fact that it also arises for b = a.norm(p=1) is because the derivative of abs is not defined at 0.

May I know how to deal with this problem? Is there a version that fixes it?

This has been fixed in master: norm now returns the subgradient, which takes the value 0 at 0.
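
As a quick sanity check (a sketch assuming a recent PyTorch build that includes the fix, where requires_grad can be set directly on a tensor), the original snippet should now print zeros:

import torch

a = torch.zeros(3, 3, requires_grad=True)
a.norm().backward()
print(a.grad)  # expected: a 3x3 tensor of zeros, not NaN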

Hi,
How can I find infinity values in my PyTorch tensor?

You should be able to compare it to inf. Something like:

tensor == float('inf')
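
For instance (a small sketch; on recent versions torch.isinf(tensor) does the same job and also catches -inf):

import torch

t = torch.tensor([1.0, float('inf'), -float('inf'), float('nan')])
print(t == float('inf'))  # tensor([False,  True, False, False])
print(torch.isinf(t))     # True for +inf and -inf alike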

Thanks, your solution is correct. I solved it this way:

def get_new_weights(weights):
    # Flatten to a 1-D view; the in-place edits below also modify `weights`.
    nw = weights.view(-1)
    PINFINITY = float('inf')
    NINFINITY = -PINFINITY
    # nan --> 1, +inf --> max finite weight, -inf --> -1e10
    nw[nw != nw] = 1  # NaN is the only value not equal to itself
    # Take the max over the non-(+inf) entries; otherwise the
    # replacement value would itself be +inf.
    finite_max = nw[nw != PINFINITY].max()
    nw[nw == PINFINITY] = finite_max
    nw[nw == NINFINITY] = -1e10
    return nw
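
For instance, applying it to a small made-up tensor that contains all three special values (hypothetical data, just to illustrate):

import torch

w = torch.tensor([[0.5, float('nan')],
                  [float('inf'), -float('inf')]])
print(get_new_weights(w))
# expected: tensor([ 5.0000e-01,  1.0000e+00,  1.0000e+00, -1.0000e+10])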