I have a question about the requires_grad= option of torch.hann_window(): the window size of a Hann window must be a positive integer, so the Hann window function is not differentiable with respect to it. How does autodiff/autograd work here? I scratched my head for quite some time but could not figure out how the operation is legitimate.
I don’t think the actual tensor creation will be differentiable, but requires_grad changes the output tensor’s attribute in the same way as it does for other factory methods such as torch.randn:
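A minimal sketch of what this means in practice: the window is created as a leaf tensor with `requires_grad=True`, and gradients then flow *into the window's values* when it is used in a downstream computation, even though the creation step itself (from the integer window length) is not differentiable.

```python
import torch

# requires_grad=True marks the created window as a leaf tensor that
# accumulates gradients; the creation from the integer length is not
# itself differentiated.
win = torch.hann_window(8, requires_grad=True)
print(win.requires_grad)  # True

# Gradients flow into the window's elements when it participates in
# a computation, just like a tensor from torch.randn(8, requires_grad=True).
signal = torch.arange(8, dtype=torch.float32)
loss = (win * signal).sum()
loss.backward()
print(win.grad)  # d(loss)/d(win) == signal
```

So the option is useful when you want to treat the window values themselves as learnable parameters (e.g. fine-tuning a window inside an STFT pipeline), not for differentiating through the window length.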