Gradients of torch.pow() give NaN when the base is non-positive

Hi, I am using PyTorch Lightning to train a model,
and I use the torch.pow function in my loss function…
The loss keeps giving NaN, and I found that when computing the gradient of torch.pow(),
the gradient is NaN whenever the base is non-positive. That makes sense for a base of 0, but I'm not sure why it happens for negative values.

It can be reproduced using this:

import torch

a = torch.tensor([-2.011, -0.000001, 0.0, 1.0, 200.0], requires_grad=True)
q = torch.pow(a, 1 / 2.2)
q.backward(a)   # backprop using `a` itself as the incoming gradient
print(a.grad)   # nan for the non-positive entries
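
Checking the forward pass as well (a minimal check, plain PyTorch, nothing Lightning-specific): even without calling backward, torch.pow already returns NaN for the negative entries, since a fractional power of a negative number is not a real number.

import torch

x = torch.tensor([-2.011, 1.0, 200.0])
print(torch.pow(x, 1 / 2.2))  # first entry is nan: (-2.011) ** (1/2.2) is not real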

I wonder, in this case, how should we deal with it?
Does anybody have ideas? Thanks @ptrblck

Hi,

Since your exponent is 1/2.2, the function x ** (1/2.2) is not defined for negative bases (a fractional power of a negative number is not a real number), hence the NaN gradient. At x = 0 the derivative, (1/2.2) * x ** (1/2.2 - 1), diverges as well, because the exponent 1/2.2 - 1 is negative.
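
If you still want a finite loss on those inputs, a common workaround is to raise the magnitude to the power and restore the sign, clamping the magnitude away from zero so the gradient stays finite. This is only a sketch under the assumption that a signed power is what your loss actually intends; safe_pow and eps are hypothetical names, not a PyTorch API:

import torch

def safe_pow(x, p, eps=1e-8):
    # Hypothetical helper: apply the power to |x| (clamped away from 0
    # so the gradient stays finite) and put the sign back.
    return torch.sign(x) * torch.pow(x.abs().clamp(min=eps), p)

x = torch.tensor([-2.011, -0.000001, 0.0, 1.0, 200.0], requires_grad=True)
y = safe_pow(x, 1 / 2.2)
y.sum().backward()
print(x.grad)  # finite everywhere

Whether the sign/clamp behavior is right depends on what your loss is supposed to mean for negative inputs, so treat it as a starting point.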

Hmm, yeah, I just realized the issue. Thanks!
