# How to set 'nan' in Tensor to 0

Hi, all

How can I set ‘NaN’ values in a Tensor to 0? Right now I have an extremely inefficient method:

```python
my_tensor_np = my_tensor.cpu().numpy()
my_tensor_np[np.isnan(my_tensor_np)] = 0
my_tensor.copy_(torch.from_numpy(my_tensor_np).cuda())
```

But copying tensors between GPU and CPU takes a lot of time, so I need a more efficient way.

Can anyone help me? Thanks a lot!


NaN means a value which is undefined or unrepresentable. In most cases it makes no sense to simply set NaNs to zero.

Thank you, iamalbert

A paper I recently read uses this trick, but implements it in Theano. I want to re-implement their algorithm in PyTorch.

I think `np.isnan` is a useful function, but torch doesn’t implement it. Is there an efficient solution?

Thank you!

It’s simple: `a != a` will give you a ByteTensor indicating the positions of NaNs, since NaN is the only value that is not equal to itself.

```
>>> b

    nan     nan -0.8395
    nan     nan     nan
-1.7921     nan  0.1864
[torch.FloatTensor of size 3x3]

>>> b != b

 1  1  0
 1  1  1
 0  1  0
[torch.ByteTensor of size 3x3]
```

You can then use `b[b != b] = 0` to set all NaNs to zero.
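
For instance, a minimal sketch of the trick (the tensor values here are illustrative, not from the thread):

```python
import torch

# Illustrative tensor containing NaNs
b = torch.tensor([float('nan'), 1.0, float('nan'), 2.0])

mask = b != b  # True exactly where b is NaN, since NaN != NaN
b[mask] = 0
print(b)  # tensor([0., 1., 0., 2.])
```

In modern PyTorch the comparison returns a BoolTensor rather than a ByteTensor, but the masked assignment works the same way.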


This doesn’t seem to work now.

I tried the code below in v0.2.0:

```python
a = torch.Tensor([1, 2, 3, 4, 5])
b = 0.0
c = a / b
c != c
```

I got all 0s.

Is there any function like np.nan_to_num?

1 Like

@Ben, that’s because `c` in your example is all `Inf`, not `NaN`. Albert’s suggestion works:

```python
a = torch.Tensor([float('NaN'), 1, float('NaN'), 2, 3])
print(a)
a[a != a] = 0
print(a)
```
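
For reference, a small sketch of the Inf/NaN distinction (per IEEE-754, a nonzero value divided by zero is Inf, while 0/0 is NaN; the variable names are mine):

```python
import torch

a = torch.Tensor([1, 2, 3, 4, 5])
c = a / 0.0
print(c != c)  # all zeros/False: these entries are Inf, and Inf == Inf

z = torch.zeros(1) / 0.0
print(z != z)  # True: 0/0 is NaN, and NaN != NaN
```
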

`torch.isnan()` was implemented earlier this year, so now you can set the NaNs to 0 using `my_tensor[torch.isnan(my_tensor)] = 0`.
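
A quick sketch of that one-liner (the tensor is just an example):

```python
import torch

t = torch.tensor([float('nan'), 1.0, float('nan'), 2.0])
t[torch.isnan(t)] = 0  # boolean mask selects the NaN slots
print(t)  # tensor([0., 1., 0., 2.])
```
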

Cheers


See the discussion in “ReLU turns nan into zeros”. As of PyTorch 0.4.1, ReLU can’t be used for this anymore, though. I’ve asked about it there.

For completeness, I’ll copy my answer:


Thank you, I’ll continue the discussion here. This works both forwards and backwards on CPU:

```python
import torch

model = torch.nn.Linear(10, 10)
x = torch.ones(10, 10).detach()
x[0, 0] = x[0, 0] + float('NaN')

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
y = model(x)
y[y != y] = 0
loss = y.sum()
loss.backward()
optimizer.step()
```

Could someone verify if it works on GPU?

Also, anyone has insights on the computational cost vs the ol’ ReLU-hack?
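
I can’t verify the GPU behaviour here, but a sketch of the same test with a device guard would look like this (the device selection and the finiteness check at the end are my additions, not from the thread):

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = torch.nn.Linear(10, 10).to(device)
x = torch.ones(10, 10, device=device)
x[0, 0] = float('nan')

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
y = model(x)
y[y != y] = 0  # zero out NaNs before reducing
loss = y.sum()
loss.backward()
optimizer.step()

print(torch.isfinite(loss).item())  # True: the masked loss is finite
```
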


Wonderful! Thanks very much.

Modifying tensors in-place can cause issues with backprop. Here is my solution, since this is still ranked highly on google:

```python
safe_tensor = torch.where(torch.isnan(my_tensor), torch.zeros_like(my_tensor), my_tensor)
```
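
For example (the tensor and variable names are illustrative), this replaces NaNs out-of-place while keeping autograd intact:

```python
import torch

x = torch.tensor([float('nan'), 1.0, 2.0], requires_grad=True)

# Replace NaNs without modifying x in place
safe = torch.where(torch.isnan(x), torch.zeros_like(x), x)
safe.sum().backward()

print(safe.detach())  # tensor([0., 1., 2.])
print(x.grad)         # tensor([0., 1., 1.]): no gradient flows through the NaN slot
```
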

Hi,

From version 1.8.1, `torch.nan_to_num` is available (see the PyTorch documentation). It replaces `NaN`, positive infinity, and negative infinity values in `input` with the values specified by `nan`, `posinf`, and `neginf`, respectively.
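
A short example (the values are illustrative; by default NaN becomes 0.0, and the infinities become the largest/smallest finite values of the dtype):

```python
import torch

t = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])

# Default replacements: nan -> 0.0, posinf/neginf -> finite dtype extremes
print(torch.nan_to_num(t))

# Explicit replacement values
print(torch.nan_to_num(t, nan=0.0, posinf=1.0, neginf=-1.0))
```
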