I am curious about the built-in fft function.
Does this function calculate the DFT coefficients directly?
What is the time complexity of the fft function if we do not use a GPU?
Does this function use a divide-and-conquer algorithm to calculate the FFT?
I haven’t actually looked at the code, but the time complexity should be
O(n log n).
After all, the function in question is
torch.fft, where “fft”
stands for “fast Fourier transform,” which uses the
“divide-and-conquer” algorithm you mention and runs in
O(n log n) time.
It would be false advertising if
torch.fft instead used the
“slow Fourier transform.”
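To make the divide-and-conquer idea concrete, here is a minimal pure-Python sketch (not PyTorch’s actual implementation) of the radix-2 Cooley–Tukey algorithm, checked against a direct O(n²) evaluation of the DFT definition:

```python
import cmath

def dft_naive(x):
    # Direct evaluation of the DFT definition: O(n^2) operations.
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def fft_recursive(x):
    # Radix-2 Cooley-Tukey: split into even- and odd-indexed halves,
    # recurse on each, then combine with twiddle factors.
    # The recurrence T(n) = 2 T(n/2) + O(n) gives O(n log n) overall.
    n = len(x)
    if n == 1:
        return list(x)
    assert n % 2 == 0, "this sketch assumes n is a power of two"
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

Production libraries use heavily optimized iterative variants of the same idea, but the divide-and-conquer structure is what brings the cost down from O(n²) to O(n log n).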
Thanks so much for the reply!
I have one more question.
Can we get the gradient of the fft function with respect to its input?
I think it is possible, but I want to make sure.
It looks like the gradient is supported. Try:
>>> import torch
>>> t = torch.randn(1, 8, 2)
>>> t.requires_grad = True
>>> torch.fft(t, 1)
tensor([[[ ...,
         [ 1.1234,  4.4275],
         [ 1.7661,  1.6113],
         [ 0.0582, -3.4854],
         [ 0.3413,  0.8839]]], grad_fn=<FftWithSizeBackward>)
(I haven’t actually tried this, but I would imagine that
FftWithSizeBackward works correctly.)
Yes, the backward of fft is implemented (and for rfft, ifft, and irfft as well).
About the speed: on CUDA the cuFFT library is used, and on CPU, MKL. Both are highly optimized libraries, so it should be very fast.
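One way to see why the gradient exists: the DFT is a linear map, so the gradient of any scalar loss built from its output is itself (essentially) another DFT of the upstream gradient. Here is a small NumPy sketch (not PyTorch’s actual backward code) that checks this identity against finite differences; the loss and weights `w` are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.standard_normal(n)                                # real input
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # fixed weights

def loss(x):
    # Scalar loss built from the FFT output, so its gradient is real.
    return np.real(np.sum(w * np.fft.fft(x)))

# Since fft(x)_k = sum_j x_j e^{-2*pi*i*j*k/n} is linear in x:
# dL/dx_j = Re( sum_k w_k e^{-2*pi*i*j*k/n} ) = Re( fft(w)_j ).
analytic = np.real(np.fft.fft(w))

# Finite-difference check (exact up to roundoff, since the loss is linear).
eps = 1e-6
numeric = np.array([
    (loss(x + eps * np.eye(n)[j]) - loss(x - eps * np.eye(n)[j])) / (2 * eps)
    for j in range(n)
])

assert np.allclose(analytic, numeric, atol=1e-5)
```

Autograd frameworks implement exactly this kind of vector-Jacobian product, which is why the backward of an FFT can reuse the FFT machinery itself.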