sh0416
(Seonghyeon Lee)
June 14, 2019, 12:18pm
#1
Hello,

I am curious about the fft built-in function.

Does this function calculate the DFT coefficients directly?

What is the time complexity of the fft function if we do not use a GPU?

Does this function use a divide-and-conquer algorithm to compute the FFT?

Thanks,

KFrank
(K. Frank)
June 14, 2019, 3:25pm
#2
Hi 성현!

I haven’t actually looked at the code, but the time complexity
should be `O(n log n)`.

After all, the function in question is `torch.fft`, where “fft”
stands for “fast Fourier transform,” which uses what you call
the divide-and-conquer algorithm and runs in `O(n log n)`.

It would be false advertising if `torch.fft` instead used the
sft* algorithm.

*) “Slow Fourier transform”
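(To make the divide-and-conquer point concrete, here is a minimal sketch of the radix-2 Cooley-Tukey recursion in plain Python. This is an illustration of the algorithm family, not PyTorch’s actual implementation, which dispatches to optimized libraries.)

```python
import cmath

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT sketch; len(x) must be a power of 2.

    Splits the input into even- and odd-indexed halves, transforms
    each recursively, then combines them with twiddle factors.
    This divide-and-conquer structure gives the O(n log n) cost.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

Each level of recursion does O(n) combine work across log n levels, versus the O(n^2) of evaluating the DFT sum directly.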

Best regards.

K. Frank

sh0416
(Seonghyeon Lee)
June 17, 2019, 4:14am
#3
Thanks so much for the reply!

I have one more question.

Can we get the gradient of fft function with respect to input?

I think it is possible, but I want to make sure.

KFrank
(K. Frank)
June 17, 2019, 1:14pm
#4
Hello 성현!

sh0416:

…
Can we get the gradient of fft function with respect to input?

I think it is possible, but I want to make sure things.

It looks like the gradient is supported. Try:

```
>>> import torch
>>> t = torch.randn(1, 8, 2)
>>> t.requires_grad = True
>>> torch.fft(t, 1)
tensor([[[-2.7232,  3.8741],
         [-2.9743, -2.1404],
         [ 1.1234,  4.4275],
         [ 1.7661,  1.6113],
         [-3.6401,  3.6872],
         [ 0.0582, -3.4854],
         [-1.6034,  0.3976],
         [ 0.3413,  0.8839]]], grad_fn=<FftWithSizeBackward>)
```

(I haven’t actually tried this, but I would imagine that
`FftWithSizeBackward` works correctly.)

Best.

K. Frank

SimonW
(Simon Wang)
June 17, 2019, 8:38pm
#5
Yes, backward of fft is implemented (and for rfft, ifft, irfft as well).
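(A quick end-to-end check of this. Note that the posts above use the older `torch.fft(input, signal_ndim)` call; the sketch below uses the newer `torch.fft.fft` function from recent PyTorch versions, where the FFT produces a complex tensor and we backpropagate through a real scalar derived from it.)

```python
import torch

# Real input with gradient tracking enabled.
t = torch.randn(8, requires_grad=True)

# FFT of a real signal yields a complex-valued spectrum.
spectrum = torch.fft.fft(t)

# Reduce to a real scalar so we can call backward().
loss = spectrum.abs().sum()
loss.backward()

# The gradient flows back to the original input.
print(t.grad)
```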

SimonW
(Simon Wang)
June 17, 2019, 8:39pm
#6
About the speed: on CUDA, the cuFFT library is used; on CPU, MKL is used. Both are highly optimized libraries, so it should be very fast.