Autograd on the window size of the Hann window in STFT

Hello,

I have a question about the requires_grad= option of torch.hann_window(): the window size of a Hann window must be a positive integer, so the Hann window function is not differentiable with respect to it. How does autodiff/autograd work here? I scratched my head for quite some time but could not figure out how the operation is legitimate.

Many thanks in advance!

I don’t think the actual tensor creation will be differentiable; requires_grad just sets the corresponding attribute on the output tensor, in the same way as for other factory methods such as torch.randn:

torch.hann_window(8)
# > tensor([0.0000, 0.1464, 0.5000, 0.8536, 1.0000, 0.8536, 0.5000, 0.1464])
torch.hann_window(8, requires_grad=True)
# > tensor([0.0000, 0.1464, 0.5000, 0.8536, 1.0000, 0.8536, 0.5000, 0.1464],
#         requires_grad=True)

torch.randn(5)
# > tensor([-2.0748,  0.8152, -1.1281,  0.8386, -0.4471])
torch.randn(5, requires_grad=True)
# > tensor([-0.5538, -0.8776, -0.5635,  0.5434, -0.8192], requires_grad=True)
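
To make the distinction concrete: the window values become a trainable leaf tensor, and gradients can flow into those values through downstream ops; the integer window size itself is never differentiated. A minimal sketch of this, assuming you feed the window into torch.stft (this example is mine, not from the original post):

import torch

# leaf tensor: the 8 window *values* are trainable, the size 8 is not
window = torch.hann_window(8, requires_grad=True)
signal = torch.randn(64)

spec = torch.stft(signal, n_fft=8, window=window, return_complex=True)
loss = spec.abs().sum()
loss.backward()

print(window.grad.shape)
# > torch.Size([8])  -- gradient w.r.t. the window values, not the window size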

Many thanks for the clarification!