# How can a tensor without NaNs and Infs have -inf mean and inf standard deviation?

Hi, I have a function

```python
import torch

def tensorStory(name, tnsr):
    print(f"{name}: {tnsr.shape}, dtype: {tnsr.dtype}, device: {tnsr.device}, "
          f"has {torch.isnan(tnsr).sum().item()} NaNs and {torch.isinf(tnsr).sum().item()} Infs "
          f"avg {torch.mean(tnsr)} dev {torch.std(tnsr)}")
```

which gives me the following output:

```
long_matrix: torch.Size([315, 4]), dtype: torch.float32, device: cpu,  has grad, has 0 NaNs and 0 Infs avg -inf dev inf
```

How is this even possible?
`long_matrix` is the result of a matrix multiplication involving a sparse COO matrix.

Hi Alex!

Presumably the tensor in question contains elements that are close
enough to the edge of the `float32` range that the computations
for `mean()` and `std()` overflow.

Consider:

```python
>>> import torch
>>> torch.__version__
'2.4.0'
>>> def tensorStory(name, tnsr):
...     print(f"{name}: {tnsr.shape}, dtype: {tnsr.dtype}, device: {tnsr.device}, "
...           f"has {torch.isnan(tnsr).sum().item()} NaNs and {torch.isinf(tnsr).sum().item()} Infs "
...           f"avg {torch.mean(tnsr)} dev {torch.std(tnsr)}")
...
>>> tFloat = -2.e38 * torch.ones (2, requires_grad = True)
>>> tensorStory ('tFloat', tFloat)
tFloat: torch.Size([2]), dtype: torch.float32, device: cpu, has 0 NaNs and 0 Infs avg -inf dev inf
```
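To make the mechanism concrete: every element here fits in `float32` (whose largest finite magnitude is about 3.4e38), but `mean()` first accumulates a sum, and that intermediate sum overflows to `-inf` before the division ever happens. A minimal sketch of this, including a possible workaround of computing the statistics in `float64` (largest magnitude about 1.8e308) — my own illustration, not something from the original thread:

```python
import torch

# Each element is representable in float32 (|x| < ~3.4e38) ...
t = -2.0e38 * torch.ones(2)

# ... but the intermediate sum is -4e38, which overflows float32.
print(t.sum())    # -inf
print(t.mean())   # -inf: overflow happens before dividing by n

# Doing the reduction in float64 keeps the sum finite,
# so the statistics come out as expected (approximately -2e38 and 0).
print(t.double().mean())
print(t.double().std())
```

Whether the `float64` cast is an acceptable fix depends on where the huge values come from in the first place; values that close to the `float32` limit often indicate an upstream scaling problem worth investigating.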

Best.

K. Frank