Zero padding prior to torch.fft

What is the most efficient way of zero-padding a multi-dimensional signal before using torch.fft?

The respective module does not take such an argument, so I am wondering whether placing the signal into a zero-initialized tensor at the appropriate positions is the most suitable way of mimicking zero-padding.

Can somebody share their thoughts on this please?
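For context, this is roughly what I mean; a minimal sketch, with the signal shape, target size, and the torch.fft.fftn call only as placeholders for my actual use case:

```python
import torch

signal = torch.randn(4, 4)       # hypothetical 2D signal
padded_size = (8, 8)             # hypothetical target size

# Copy the signal into a larger zero-initialized tensor (here into the top-left corner).
padded = torch.zeros(padded_size, dtype=signal.dtype)
padded[:signal.shape[0], :signal.shape[1]] = signal

# FFT of the zero-padded signal.
spectrum = torch.fft.fftn(padded)
```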


Placing the tensor into another tensor initialized with zeros should work.
However, the more straightforward way would probably be to use F.pad instead.
Would that work for you?
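Something along these lines (the sizes and the fftn call are just an example, not your exact setup):

```python
import torch
import torch.nn.functional as F

signal = torch.randn(4, 4)       # hypothetical 2D signal

# F.pad takes (left, right) pairs starting from the last dimension:
# here 4 zeros on the right of the last dim and 4 at the bottom of the
# second-to-last dim, giving an 8x8 tensor.
padded = F.pad(signal, (0, 4, 0, 4), mode="constant", value=0)

spectrum = torch.fft.fftn(padded)
```

Note that the pad tuple is specified from the last dimension backwards, so for higher-dimensional signals you just append more (before, after) pairs for the remaining dimensions.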


Thank you, I will try that out.
