# The time complexity of the fft function

Hello,

I am curious about the fft built-in function.

Does this function calculate the DFT coefficients directly?

What is the time complexity of the fft function if we do not use a GPU?

Does this function use a divide-and-conquer algorithm to calculate the FFT?

Thanks,

Hi 성현!

I haven’t actually looked at the code, but the time complexity
should be `n log n`.

After all, the function in question is `torch.fft`, where “fft”
stands for “fast Fourier transform,” which uses what you call
the “divide-and-conquer” algorithm and runs in `n log n`.
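To illustrate the divide-and-conquer idea, here is a minimal radix-2 Cooley–Tukey sketch in plain Python (not PyTorch's actual implementation, just the textbook algorithm), checked against a direct O(n²) DFT:

```python
import cmath

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT: split into even/odd halves,
    recurse, then combine with twiddle factors. len(x) must be
    a power of two. Runs in O(n log n)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    result = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle
        result[k + n // 2] = even[k] - twiddle
    return result

def dft_direct(x):
    """Naive O(n^2) DFT (the "sft"), for comparison."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n))
            for k in range(n)]

x = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, -2.0, -3.0]
fast = fft_recursive(x)
slow = dft_direct(x)
print(all(abs(a - b) < 1e-9 for a, b in zip(fast, slow)))  # True
```

The even/odd split is exactly the divide-and-conquer step: each level halves the problem, giving log n levels of O(n) work.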

It would be false advertising if `torch.fft` instead used the
sft* algorithm.

*) “Slow Fourier transform”

Best regards.

K. Frank

Thanks so much for the reply.

I have one more question.

Can we get the gradient of the fft function with respect to its input?

I think it is possible, but I want to make sure.

Hello 성현!

It looks like the gradient is supported. Try:

```
>>> import torch
>>> t = torch.randn(1, 8, 2)
>>> t.requires_grad = True
>>> torch.fft(t, 1)
tensor([[[-2.7232,  3.8741],
         [-2.9743, -2.1404],
         [ 1.1234,  4.4275],
         [ 1.7661,  1.6113],
         [-3.6401,  3.6872],
         [ 0.0582, -3.4854],
         [-1.6034,  0.3976],
         [ 0.3413,  0.8839]]], grad_fn=<FftWithSizeBackward>)
```

(I haven’t actually tried this, but I would imagine that
`FftWithSizeBackward` works correctly.)
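If it helps, here is a sketch of actually running the backward pass, using the newer `torch.fft.fft` module-style API rather than the older `torch.fft(t, 1)` call shown above (an assumption that you are on a recent PyTorch); the complex output is reduced to a real scalar before calling `backward()`:

```python
import torch

# Real input; torch.fft.fft returns a complex tensor.
t = torch.randn(8, requires_grad=True)
y = torch.fft.fft(t)

# Autograd needs a real scalar loss, so reduce via abs().
loss = y.abs().sum()
loss.backward()

print(t.grad.shape)  # torch.Size([8])
```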

Best.

K. Frank

Yes, the backward of fft is implemented (and for rfft, ifft, and irfft as well).

About speed: on CUDA, the cuFFT library is used; on CPU, MKL is used. Both are highly optimized libraries, so it should be very fast.
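As a quick sanity check of the rfft/irfft backward support mentioned above, a round trip through the real-input transforms (again using the newer `torch.fft` module API, which is an assumption about your PyTorch version) reconstructs the input and backpropagates cleanly:

```python
import torch

t = torch.randn(16, requires_grad=True)

# rfft keeps only the non-redundant half of the spectrum for
# real input; irfft inverts it (n must be passed explicitly).
recon = torch.fft.irfft(torch.fft.rfft(t), n=16)

# Both transforms have backward implemented, so this works.
loss = (recon - t).pow(2).sum()
loss.backward()

print(torch.allclose(recon, t, atol=1e-6))  # True
```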