This code prints an array of nan:
a = Variable(torch.zeros(3, 3), requires_grad=True)
b = a.norm()
b.backward()
print(a.grad)   # every entry is nan
Have I done anything wrong? It looks rather like a formula bug …
I have found a similar issue here; it may be related …
The problem is that you are trying to take the derivative of the square root function at 0, which is +infinity. The gradient of a is then +infinity * 0 = NaN.
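To make that chain rule visible, here is a minimal sketch that decomposes the norm into a sum of squares followed by a square root (written with plain tensors, which newer PyTorch versions accept in place of Variable):
import torch
x = torch.zeros(3, 3, requires_grad=True)
s = (x ** 2).sum()   # s = 0
y = s.sqrt()         # d(sqrt)/ds at s = 0 is +infinity
y.backward()
print(x.grad)        # +infinity * 0 = nan for every entry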
Ok, got it.
I guess the fact that it also arises for
b = a.norm(p=1)
is because the derivative of abs is not defined at 0.
May I know how to deal with this problem? Is there any version that fixes it?
This has been fixed in master; norm now returns the subgradient, with value 0 at 0.
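If you are stuck on an older version, a common workaround (a standard trick, not something from this thread) is to add a small epsilon under the square root so the gradient stays finite at 0. A minimal sketch, assuming a fresh a as in the first post and using 1e-12 as an arbitrary choice:
b = (a.pow(2).sum() + 1e-12).sqrt()   # same value as a.norm() up to the epsilon
b.backward()
print(a.grad)                         # zeros instead of nan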
How can I find infinity values in my PyTorch tensor?
You should be able to compare it to inf. Something like:
tensor == float('inf')
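If you are on a newer PyTorch version, there are also dedicated helpers for this check:
torch.isinf(tensor)   # True where entries are +inf or -inf
torch.isnan(tensor)   # True where entries are NaN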
Your solution is correct. I solved it this way:
PINFINITY = float('inf')
NINFINITY = -PINFINITY
# replace -inf with -1e10, +inf with max(weights), and nan with 1
nw[nw != nw] = 1                    # NaN is the only value not equal to itself
nw[nw == PINFINITY] = max(weights)
nw[nw == NINFINITY] = -1e10
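For reference, a self-contained toy version of the same cleanup (nw and weights here are made-up stand-ins for the poster's variables):
import torch
nw = torch.tensor([0.5, float('inf'), float('-inf'), float('nan')])
weights = [0.1, 0.7, 0.3]               # stand-in values
nw[nw != nw] = 1                        # nan entries -> 1
nw[nw == float('inf')] = max(weights)   # +inf -> 0.7
nw[nw == float('-inf')] = -1e10         # -inf -> -1e10
print(nw)                               # -> 0.5, 0.7, -1e10, 1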