torch.fft: could not understand the argument signal_ndim

I am computing a loss using the FFT. I have worked with numpy.fft.fft before, but I am not able to understand the use of the "signal_ndim" argument.

For example, I created a NumPy array:
x = np.arange(30)
array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])

Then I also made a torch tensor from it and pushed it to the GPU:
y = torch.tensor(x, requires_grad=True, dtype=torch.float64, device=device)
y = tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13.,
14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27.,
28., 29.], device='cuda:0', dtype=torch.float64, requires_grad=True)

Now when I do the NumPy FFT, np.fft.fft(x), I get this:

array([435.+0.00000000e+00j, -15.+1.42715467e+02j, -15.+7.05694516e+01j,
-15.+4.61652531e+01j, -15.+3.36905516e+01j, -15.+2.59807621e+01j,
-15.+2.06457288e+01j, -15.+1.66591877e+01j, -15.+1.35060607e+01j,
-15.+1.08981379e+01j, -15.+8.66025404e+00j, -15.+6.67843028e+00j,
-15.+4.87379544e+00j, -15.+3.18834843e+00j, -15.+1.57656353e+00j,
-15.+1.77635684e-15j, -15.-1.57656353e+00j, -15.-3.18834843e+00j,
-15.-4.87379544e+00j, -15.-6.67843028e+00j, -15.-8.66025404e+00j,
-15.-1.08981379e+01j, -15.-1.35060607e+01j, -15.-1.66591877e+01j,
-15.-2.06457288e+01j, -15.-2.59807621e+01j, -15.-3.36905516e+01j,
-15.-4.61652531e+01j, -15.-7.05694516e+01j, -15.-1.42715467e+02j])

But when I do torch.fft(y, 1), it gives a runtime error:
RuntimeError: Given signal_ndim=1, expected an input tensor of at least 2D (complex input adds an extra dimension), but got input=torch.cuda.DoubleTensor[30]

So I reshaped it with torch.reshape(y, (15, 2)); now torch.fft(y, 1) outputs:
tensor([[210.0000, 225.0000],
[-85.5695, 55.5695],
[-48.6906, 18.6906],
[-35.6457, 5.6457],
[-28.5061, -1.4939],
[-23.6603, -6.3397],
[-19.8738, -10.1262],
[-16.5766, -13.4234],
[-13.4234, -16.5766],
[-10.1262, -19.8738],
[ -6.3397, -23.6603],
[ -1.4939, -28.5061],
[ 5.6457, -35.6457],
[ 18.6906, -48.6906],
[ 55.5695, -85.5695]], device='cuda:0', dtype=torch.float64,
grad_fn=<FftWithSizeBackward>)

If I pass the same reshaped matrix to NumPy, I get a different output:
array([[ 1.+0.j, -1.+0.j],
[ 5.+0.j, -1.+0.j],
[ 9.+0.j, -1.+0.j],
[13.+0.j, -1.+0.j],
[17.+0.j, -1.+0.j],
[21.+0.j, -1.+0.j],
[25.+0.j, -1.+0.j],
[29.+0.j, -1.+0.j],
[33.+0.j, -1.+0.j],
[37.+0.j, -1.+0.j],
[41.+0.j, -1.+0.j],
[45.+0.j, -1.+0.j],
[49.+0.j, -1.+0.j],
[53.+0.j, -1.+0.j],
[57.+0.j, -1.+0.j]])
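As a sanity check, NumPy is evidently transforming each length-2 row on its own here, since np.fft.fft defaults to axis=-1: each row [a, b] becomes [a+b, a-b].

```python
import numpy as np

x = np.arange(30).reshape(15, 2)

# np.fft.fft works along the last axis by default, so each
# length-2 row [a, b] is its own signal: FFT([a, b]) = [a+b, a-b].
out = np.fft.fft(x)

print(out[0])  # row [0, 1] -> [ 1.+0.j, -1.+0.j]
print(out[1])  # row [2, 3] -> [ 5.+0.j, -1.+0.j]
```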

Can someone please explain this to me?

If you want to use the old torch.fft function, the input should have size 2 along the last dimension, where x[..., 0] is your real component and x[..., 1] is the imaginary one. That is why your reshaped (15, 2) tensor ran without error but gave a different result: torch interpreted it as the 15 complex numbers 0+1j, 2+3j, ..., 28+29j and took one 15-point FFT, while NumPy transformed each length-2 row independently. For a real input, you should instead build the (real, imaginary) layout with something like this:

torch.stack((x, torch.zeros_like(x)), dim=-1)
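To make that layout concrete, here is the same packing sketched in NumPy (the old torch.fft API was removed in later PyTorch releases, so this just illustrates the idea): a trailing dimension of size 2 holding (real, imag) pairs is equivalent to an array of complex numbers, and packing (x, 0) recovers the ordinary FFT of the real signal.

```python
import numpy as np

x = np.arange(30, dtype=np.float64)

# Old torch.fft layout: real part in [..., 0], imaginary part in [..., 1].
stacked = np.stack((x, np.zeros_like(x)), axis=-1)  # shape (30, 2)

# Reinterpreting each trailing (real, imag) pair as one complex number
# gives back the usual 1-D FFT of the real input.
as_complex = stacked[..., 0] + 1j * stacked[..., 1]

assert np.allclose(np.fft.fft(as_complex), np.fft.fft(x))
```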

The signal_ndim argument selects a 1-D, 2-D, or 3-D FFT, i.e. how many of the trailing dimensions are treated as signal dimensions. In the current torch.fft module, you can use torch.fft.fft, torch.fft.fft2, or torch.fft.fftn instead. The newer module also supports complex inputs and returns complex tensors, so there is no need to pass the real and imaginary components as a separate channel.
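A minimal sketch with the current API (assuming PyTorch 1.8 or later, where the torch.fft module exists), run on CPU for simplicity: the real input goes straight in, a complex tensor comes out, and the result matches NumPy directly.

```python
import numpy as np
import torch

x = np.arange(30)
y = torch.tensor(x, dtype=torch.float64)

# New-style API: 1-D FFT of a real input, no (real, imag) channel needed.
out = torch.fft.fft(y)  # complex tensor, same values as np.fft.fft(x)
assert out.dtype == torch.complex128
assert np.allclose(out.numpy(), np.fft.fft(x))

# fft2 / fftn replace the old signal_ndim=2 / signal_ndim=3 cases.
z = y.reshape(15, 2)
out2 = torch.fft.fft2(z)
assert np.allclose(out2.numpy(), np.fft.fft2(z.numpy()))
```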