Efficient FFT padding


Performing an FFT-based convolution in 3D requires zero-padding the input data in all three dimensions and then computing an `fftn` over all three. Since the data occupies only one octant of the padded array, the first 1D FFT only needs to be performed for half of the lines (the other lines are all zero).
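To illustrate the idea, here is a minimal sketch (sizes `n = 4` and the factor-of-two padding are just assumptions for the example): the 3D FFT is split into three 1D passes, and the first pass is restricted to the lines that actually contain data, since the FFT of an all-zero line is zero.

```python
import torch

n = 4           # assumed original size per dimension
N = 2 * n       # zero-padded size for a linear convolution

x = torch.randn(n, n, n)
xp = torch.zeros(N, N, N)
xp[:n, :n, :n] = x   # data occupies only the first octant

# Reference: full 3D FFT of the padded volume
full = torch.fft.fftn(xp)

# Separable version: the first 1D FFT along dim 2 is only
# computed for the non-zero lines; the rest stay zero.
step1 = torch.zeros(N, N, N, dtype=torch.complex64)
step1[:n, :n, :] = torch.fft.fft(xp[:n, :n, :], dim=2)
step2 = torch.fft.fft(step1, dim=1)
step3 = torch.fft.fft(step2, dim=0)

assert torch.allclose(full, step3, atol=1e-3)
```

So only half of the 1D transforms in the first pass are actually needed; the question is whether PyTorch exploits this internally.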

I am wondering whether PyTorch uses this optimization when I use the `s` parameter to extend the input dimensions. Unfortunately, separating the 3D FFT into three 1D FFTs in Python leads to additional overhead.
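For reference, this is the `s`-parameter usage I mean; as far as I can tell it is equivalent to explicit zero-padding (again a small sketch with assumed sizes):

```python
import torch

n = 4
x = torch.randn(n, n, n)

# s pads each dimension to length 2*n before transforming
X1 = torch.fft.fftn(x, s=(2 * n, 2 * n, 2 * n))

# Equivalent explicit zero-padding
xp = torch.zeros(2 * n, 2 * n, 2 * n)
xp[:n, :n, :n] = x
X2 = torch.fft.fftn(xp)

assert torch.allclose(X1, X2, atol=1e-3)
```

Since the `s` parameter tells the backend exactly which region contains data, it would in principle have the information needed to skip the all-zero lines in the first 1D pass.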

For the `ifft` the situation is similar: the output is only needed in the first octant, so e.g. after the first 1D inverse FFT, half of the data could already be cropped. Since the `s` parameter of `ifftn` seems to crop the data before the transform rather than after it, I don't see an efficient way to implement this.
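What I mean is that the inverse transform is also separable, so cropping can be interleaved between the 1D passes, shrinking the workload of each subsequent pass (a sketch with assumed sizes, just to show the equivalence):

```python
import torch

n = 4
N = 2 * n
X = torch.fft.fftn(torch.randn(N, N, N))  # some full-size spectrum

# Reference: full inverse transform, then crop to the first octant
ref = torch.fft.ifftn(X)[:n, :n, :n]

# Separable inverse with early cropping: after each 1D ifft,
# only the first n entries along that dim are needed downstream.
y = torch.fft.ifft(X, dim=0)[:n, :, :]
y = torch.fft.ifft(y, dim=1)[:, :n, :]
y = torch.fft.ifft(y, dim=2)[:, :, :n]

assert torch.allclose(ref, y, atol=1e-3)
```

Doing this with three separate `ifft` calls in Python works, but it has the same call overhead as the forward case.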

I'm not sure whether this is the right place for this sort of question, or whether I should instead open an issue as a feature request?

Thanks and best wishes