Standard deviation of complex tensors

Hi there,

I experimented with the complex tensor operations introduced in PyTorch 1.7. I was a bit surprised by the following example:

# tested in PyTorch 1.7.1
import torch as pt

X1 = pt.tensor([1.0, -1.0], dtype=pt.cfloat)
print(pt.std(X1, dim=0)) # results in sqrt(2)+0.0j
print(pt.std(X1.real)) # also results in sqrt(2)
print(pt.std(X1)) # causes a runtime error
# RuntimeError: _th_std not supported on CPUType for ComplexFloat

I do not really understand why the third print statement raises a runtime error while the first one works; the only difference is whether dim is passed explicitly. At first I thought this was an issue with higher-dimensional tensors only, but it also happens for 1D inputs.

My second question is related to the formula used to compute the standard deviation. Consider the following example:

import torch as pt

X1 = pt.tensor([1.0, -1.0], dtype=pt.cfloat)
X2 = pt.tensor([1.0j, -1.0j], dtype=pt.cfloat)
X3 = pt.tensor([1.0+1.0j, -1.0-1.0j], dtype=pt.cfloat)
print(pt.std(X1, dim=0)) # results in sqrt(2)+0.0j
print(pt.std(X2, dim=0)) # results in sqrt(2)+0.0j
print(pt.std(X3, dim=0)) # results in 2.0+0.0j
print(pt.std(X3.real)) # results in sqrt(2)

Based on this simple example, I guess that the standard deviation of a complex tensor is computed from the squared magnitudes of the deviations from the (complex) mean, something like

pt.sqrt(pt.dot(X - X.mean(), (X - X.mean()).conj()) / (len(X) - 1))

which, for the examples above (zero mean and len(X) - 1 == 1), reduces to pt.sqrt(pt.dot(X, X.conj())).

Is that correct? I couldn't find this in the documentation, and I wasn't able to locate the code that actually computes the std. I would have expected the imaginary part to be dropped before computing the standard deviation (is there an application where the above definition is meaningful?).
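For what it's worth, all the printed values are consistent with the usual unbiased estimator applied to the squared magnitudes of the deviations from the mean. Here is a quick check in plain Python (torch-free; complex_std is my own helper reconstructing my guess, not a PyTorch function):

```python
import math

def complex_std(xs):
    # My guess at what pt.std might do for complex input:
    # unbiased variance of the absolute deviations from the complex mean.
    n = len(xs)
    mean = sum(xs) / n
    var = sum(abs(x - mean) ** 2 for x in xs) / (n - 1)
    return math.sqrt(var)

print(complex_std([1.0 + 0.0j, -1.0 + 0.0j]))  # ~1.4142, like pt.std(X1, dim=0)
print(complex_std([1.0j, -1.0j]))              # ~1.4142, like pt.std(X2, dim=0)
print(complex_std([1.0 + 1.0j, -1.0 - 1.0j]))  # 2.0, like pt.std(X3, dim=0)
```

If this is indeed the definition, it would also explain the relation between the X3 results above: the squared complex std equals the sum of the squared stds of the real and imaginary parts (2.0**2 == sqrt(2)**2 + sqrt(2)**2).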

Thanks and regards,
Andre