torch.var(..., dim=...) returning zero when it shouldn't

Hi,

I need to calculate the variance of some numbers that are pretty small and pretty close together, i.e. the variances themselves are very small. Unfortunately, torch.var() returns zero for the variance when it shouldn't, but the problem only occurs when I use the dim=... argument.

EDIT: I managed to resolve my actual problem by switching from FloatTensors to DoubleTensors while composing this question. However, I thought it might still be worth asking why the problem occurs when using dim=... but not when leaving it out. Why is that so?
See this example:

import torch
my_array = torch.Tensor([-0.008015724, -0.008016450])

torch.var(my_array, dim=0)[0]
# Output: 0
torch.var(my_array)
# Output: 2.6385144069607236e-13
torch.var(my_array.double(), dim=0)[0]
# Output: 2.6385144069607236e-13
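
While composing this, I also tried computing the variance by hand both ways. This is just a sketch of the numerics (not the actual torch internals, which I haven't checked): the one-pass formula E[x^2] - E[x]^2 cancels to zero in float32, while the two-pass formula survives even in float32.

import torch

x = torch.Tensor([-0.008015724, -0.008016450])  # FloatTensor, i.e. float32
n = x.numel()

# One-pass formula: E[x^2] - E[x]^2, then Bessel's correction n/(n-1).
# The two terms agree to ~7 significant digits, which is all float32
# has, so the subtraction cancels almost everything.
one_pass = (x.pow(2).mean() - x.mean() ** 2) * n / (n - 1)
print(one_pass)  # 0 (or rounding noise), nowhere near the true value

# Two-pass formula: subtract the mean first, then square. The tiny
# deviations (~3.6e-7) are representable just fine, since float32's
# limit is relative precision, not magnitude.
two_pass = (x - x.mean()).pow(2).sum() / (n - 1)
print(two_pass)  # ~2.64e-13 even in float32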

Many thanks to anyone who can help me understand this better.

I'm just vaguely wondering, as I skim through your question, if this is related to the Bessel's correction that we apply: https://en.wikipedia.org/wiki/Bessel%27s_correction
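
One quick way to test that hunch (a minimal sketch, assuming torch.var's unbiased flag): Bessel's correction only rescales the result by n/(n-1), so toggling it shouldn't be able to turn a nonzero variance into an exact zero.

import torch

my_array = torch.Tensor([-0.008015724, -0.008016450])

# Bessel's correction only changes the divisor (n-1 vs. n); it rescales
# the result but can't produce an exact zero from non-constant data.
print(torch.var(my_array, dim=0, unbiased=True))
print(torch.var(my_array, dim=0, unbiased=False))
# If both still print 0 in float32, the cause is precision, not the
# correction.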