ReLU turns nan into zeros

Assume we are in the unfortunate case of having a NaN-valued Variable. If it is passed through a ReLU activation, the output is zero. Is that the desired behaviour? (Other activation functions return NaN instead, as I would have expected.)

import torch
from torch.autograd import Variable
import torch.nn.functional as F

A = Variable(torch.zeros(1))/0 # nan
print(F.relu(A))       # 0
print(F.elu(A))        # nan
print(F.leaky_relu(A)) # nan
print(F.sigmoid(A))    # nan

Yes, this is desired/expected. Doing a max(x, nan) will ignore the NaN and pass through x.
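The two NaN conventions mentioned here can be illustrated with NumPy's `maximum` vs `fmax` (an analogy, not the actual ReLU kernel): `maximum` propagates NaN, while `fmax` ignores it and returns the other operand, which mirrors the old ReLU-as-`max(x, 0)` behaviour described above.

```python
import numpy as np

x = np.array([float('nan'), -1.0, 2.0])

# np.maximum propagates NaN (any comparison with NaN is undefined, NaN wins):
print(np.maximum(x, 0.0))  # [nan  0.  2.]

# np.fmax ignores NaN and returns the other operand instead,
# matching the "max(x, nan) passes through x" behaviour of the old ReLU:
print(np.fmax(x, 0.0))     # [0. 0. 2.]
```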


Good to know, thanks.

As of PyTorch 0.4.1 this is not the case anymore: relu(NaN) == NaN.

In [1]: import torch

In [2]: x = torch.ones(1).float()+float('NaN')

In [3]: x
Out[3]: tensor([    nan])

In [4]: x.relu()
Out[4]: tensor([    nan])

I’ve previously happily (ab)used the previous behaviour. Is there a suggested replacement, as asked in How to set ‘nan’ in Tensor to 0?

I assume the suggested method my_tensor[torch.isnan(my_tensor)] = 0. will cause problems on the GPU and have a high memory cost. Is there some other method?

Since NaN != NaN, you could do my_tensor[my_tensor != my_tensor] = 0


Edit: I’ll continue the discussion in that thread.