 # Zero-padding of `ifft` does not work as expected

Hi :)

I want to calculate the inverse FFT of a truncated input vector,

```python
Python 3.8.10 (default, May 19 2021, 18:05:58)
[GCC 7.3.0] :: Anaconda, Inc. on linux
>>> import torch
>>> x = torch.rand(5)
>>> fx = torch.fft.fft(x)
>>> fx
tensor([3.2079+0.0000j, 0.3252+0.3854j, 0.1552+0.6282j, 0.1552-0.6282j, 0.3252-0.3854j])
>>> fx[2] = 0.
>>> fx[3] = 0.
>>> fx
tensor([3.2079+0.0000j, 0.3252+0.3854j, 0.0000+0.0000j, 0.0000+0.0000j, 0.3252-0.3854j])
>>> torch.fft.ifft(fx)
tensor([0.7717+0.j, 0.5352+0.j, 0.4457+0.j, 0.6269+0.j, 0.8284+0.j])
```

Note that the truncation affects the highest-frequency modes, and since both `fx[2]` and `fx[3]` are set to zero, the result of `ifft` is real-valued, as expected.

Naively, I would expect `torch.fft.ifft` to give the very same result when the frequency components are truncated rather than set to zero,

```python
>>> torch.fft.ifft(torch.tensor([fx[0], fx[1], fx[4]]), n=5)
tensor([0.7717+0.0000j, 0.5811+0.1863j, 0.4905-0.1098j, 0.7277-0.0625j, 0.6370-0.0139j])
```

but the result is now different and complex.
Obviously I am missing something; perhaps I misunderstand the parameter `n`?

Any ideas?

Hi Avitase!

PyTorch’s `fft` functions, including `ifft()`, pad by simply appending zeros
to the end of the input vector – they don’t insert zeros into the middle.

You can get the behavior I believe you want by using `rfft()` / `irfft()`.
The following illustrates these two points:

```python
>>> import torch
>>> torch.__version__
'1.9.0'
>>> x = torch.arange (1, 6).float()**(1.0 / 3.0)
>>> x
tensor([1.0000, 1.2599, 1.4422, 1.5874, 1.7100])
>>> fx = torch.fft.fft (x)
>>> fx
tensor([ 6.9995+0.0000j, -0.5333+0.5133j, -0.4665+0.1265j, -0.4665-0.1265j,
        -0.5333-0.5133j])
>>> torch.fft.ifft (torch.tensor ([fx[0], fx[1], 0.0, 0.0, fx[4]]))
tensor([1.1866+0.j, 1.1387+0.j, 1.4518+0.j, 1.6932+0.j, 1.5293+0.j])
>>> torch.fft.ifft (torch.tensor ([fx[0], fx[1], fx[4], 0.0, 0.0]))
tensor([1.1866+0.0000j, 1.4159-0.0493j, 1.2952-0.0760j, 1.6112-0.1535j,
        1.4905+0.2789j])
>>> torch.fft.ifft (torch.tensor ([fx[0], fx[1], fx[4]]), n = 5)
tensor([1.1866+0.0000j, 1.4159-0.0493j, 1.2952-0.0760j, 1.6112-0.1535j,
        1.4905+0.2789j])
>>> rfx = torch.fft.rfft (x)
>>> rfx
tensor([ 6.9995+0.0000j, -0.5333+0.5133j, -0.4665+0.1265j])
>>> torch.fft.irfft (torch.tensor ([rfx[0], rfx[1]]), n = 5)
tensor([1.1866, 1.1387, 1.4518, 1.6932, 1.5293])
>>> torch.fft.irfft (torch.tensor ([rfx[0], rfx[1], 0.0]), n = 5)
tensor([1.1866, 1.1387, 1.4518, 1.6932, 1.5293])
```

Best.

K. Frank

Thanks for the help. I actually do need the complex version and only used `torch.arange` as an example. I understand your point that `fft` and `ifft` pad zeros at the end, but I am now wondering when that would ever be useful for an inverse Fourier transform. Are there good reasons for it, or is the behaviour I expected worth a feature request?

Hi Avitase!

Okay, I think I understand your use case now. Indeed, for the complex
case, `rfft()` won’t work, and, as you noted, naive padding also doesn’t
work.

I suppose you could use `fftshift()`, zero out the ends of the vector,
and restore the required order with `ifftshift()` before applying `ifft()`.
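To make the `fftshift()` route above concrete, here is a minimal sketch. Note that `torch.fft.fftshift` / `ifftshift` arrived in PyTorch releases after the 1.9 shown in the transcript, so this assumes a sufficiently recent version:

```python
import torch

# Minimal sketch of the fftshift() approach for the length-5 example above.
x = torch.arange(1, 6).float() ** (1.0 / 3.0)
fx = torch.fft.fft(x)

shifted = torch.fft.fftshift(fx)  # reorder to [f_-2, f_-1, f_0, f_1, f_2]
shifted[0] = 0.0                  # zero out the ends of the shifted vector,
shifted[-1] = 0.0                 # i.e. the highest frequencies f_-2 and f_2
y = torch.fft.ifft(torch.fft.ifftshift(shifted))
# y is real-valued up to round-off, identical to zeroing fx[2] and fx[3]
```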

But that would be a little verbose for my taste. I would just do the
necessary index arithmetic to zero out the high-frequency components:

```python
>>> fx
tensor([ 6.9995+0.0000j, -0.5333+0.5133j, -0.4665+0.1265j, -0.4665-0.1265j,
        -0.5333-0.5133j])
>>> nZero = 2
>>> fxc = fx.clone()
>>> l = len (fxc)
>>> fxc[(l - nZero + 1) // 2 : (l + nZero + 1) // 2] = 0.0
>>> fxc
tensor([ 6.9995+0.0000j, -0.5333+0.5133j,  0.0000+0.0000j,  0.0000+0.0000j,
        -0.5333-0.5133j])
>>> torch.fft.ifft (fxc)
tensor([1.1866+0.j, 1.1387+0.j, 1.4518+0.j, 1.6932+0.j, 1.5293+0.j])
```

Best.

K. Frank


Thank you again for the help. I already did something similar with `torch.nn.functional.pad` (I need this to work in N dimensions). Maybe I will write a feature request later today.
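For readers landing here, one way to sketch the `F.pad`-based idea the poster describes: work on the *centered* (`fftshift`-ed) spectrum, where symmetric zero-padding at both ends corresponds to the highest frequencies. The helper name and the odd-length restriction are my own assumptions, not from the thread:

```python
import torch
import torch.nn.functional as F

# Hypothetical helper: inverse-transform a truncated spectrum given in
# centered (fftshift-ed) order, padding symmetrically with F.pad so the
# zeros land on the highest frequencies. Odd lengths only, to keep the
# index arithmetic simple.
def ifft_centered(fx_centered: torch.Tensor, n: int) -> torch.Tensor:
    pad_total = n - fx_centered.shape[-1]
    left = pad_total // 2
    right = pad_total - left
    padded = F.pad(fx_centered, (left, right))  # zeros at both ends
    return torch.fft.ifft(torch.fft.ifftshift(padded, dim=-1))

x = torch.arange(1, 6).float() ** (1.0 / 3.0)
fx = torch.fft.fft(x)
trunc = torch.fft.fftshift(fx)[1:4]  # keep only f_-1, f_0, f_1
y = ifft_centered(trunc, 5)          # same as zeroing fx[2] and fx[3]
```

Because `F.pad` takes per-dimension `(left, right)` pairs, the same pattern extends to N dimensions by shifting and padding each transformed axis.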