theevann
(Evann)
August 29, 2017, 2:37pm
1
Hello!
This code prints an array of nan:
import torch
from torch.autograd import Variable

a = Variable(torch.zeros(3, 3), requires_grad=True)
b = a.norm()
b.backward()
print(a.grad)
Have I done anything wrong? It looks rather like a formula bug…
I have found a similar issue here; it may be related…
In torch.autograd._functions.reduce, class Prod was implemented in a way that produces a nan gradient when a zero value is given.
Starting from the product of all inputs, the gradient is calculated by dividing that product by each input entry.
When an input entry is zero, this method returns a "nan" gradient.
By replacing the backward() function of the Prod() class with
if self.dim is None:
    input, = self.saved_tensors
    zero_loc = (input == 0).nonzero()
    if zero_loc.dim() == 0:
        …
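To illustrate the quoted Prod issue, a minimal reproduction (on the affected versions) might look like this:

import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([0, 2, 3]), requires_grad=True)
y = x.prod()   # the product is 0
y.backward()
print(x.grad)  # on affected versions: [nan, 0, 0], since 0 / 0 = nan for the zero entry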
albanD
(Alban D)
August 29, 2017, 3:13pm
2
The problem is that you are trying to take the derivative of the square root function at 0, which is +infinity. The gradient of a is then +infinity * 0 = NaN.
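To see this numerically, here is a minimal sketch (using the current tensor API, with made-up intermediate names): the 2-norm is the square root of the sum of squares, and the chain rule multiplies an infinite factor by a zero factor.

import torch

sumsq = torch.tensor(0.)                   # sum of squares of a zero tensor
d_norm_d_sumsq = 0.5 / torch.sqrt(sumsq)   # derivative of sqrt at 0 -> inf
d_sumsq_d_a = 2 * torch.tensor(0.)         # derivative of the sum of squares at 0 -> 0
print(d_norm_d_sumsq * d_sumsq_d_a)        # inf * 0 -> nan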
theevann
(Evann)
August 29, 2017, 3:24pm
3
OK, got it.
I guess it also arises for b = a.norm(p=1) because the derivative of abs is not defined at 0.
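For instance, on the version discussed here, the same setup with the 1-norm also gives nan gradients:

import torch
from torch.autograd import Variable

a = Variable(torch.zeros(3, 3), requires_grad=True)
b = a.norm(p=1)
b.backward()
print(a.grad)  # nan on the affected version: d|x|/dx is not defined at x = 0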
May I know how to deal with this problem? Is there a version that fixes it?
albanD
(Alban D)
November 8, 2017, 10:22am
5
This has been fixed in master; the norm now returns the subgradient, with value 0 at 0.
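A quick check on a build that includes the fix (using the current tensor API):

import torch

a = torch.zeros(3, 3, requires_grad=True)
b = a.norm()
b.backward()
print(a.grad)  # all zeros: the subgradient with value 0 is used at 0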
Hi,
How can I find infinity values in my PyTorch tensor?
richard
(Richard Zou)
December 14, 2017, 8:13pm
7
You should be able to compare it to inf. Something like:
tensor == float('inf')
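For example, with a small hypothetical tensor (torch.isinf is also available in more recent releases):

import torch

t = torch.tensor([1.0, float('inf'), -float('inf'), float('nan')])
print(t == float('inf'))  # tensor([False,  True, False, False])
print(torch.isinf(t))     # tensor([False,  True,  True, False]) -- flags both +inf and -inf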
Thanks, your solution is correct. I solved it this way:
def get_new_weights(weights):
    # Flatten; the view shares storage with `weights`, so replacements happen in place.
    nw = weights.view(-1)
    PINFINITY = float('inf')
    NINFINITY = -PINFINITY
    # nan --> 1, +inf --> max of the finite entries, -inf --> -1e10
    nw[nw != nw] = 1  # nan != nan, so this selects the NaNs
    # use the largest finite value (Python's max(weights) would itself be inf here)
    nw[nw == PINFINITY] = nw[nw.abs() != PINFINITY].max()
    nw[nw == NINFINITY] = -1e10
    return nw
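A quick usage check on a hypothetical weight tensor:

import torch

w = torch.tensor([0.5, float('inf'), -float('inf'), float('nan')])
print(get_new_weights(w))
# nan -> 1, +inf -> largest finite value (here 1.0), -inf -> -1e10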