I’ve been trying to implement an autograd extension for index-based pooling (rather than spatial pooling). However, when I implement backward() and then call it in my tests, I get the error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
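For context, my Function is shaped roughly like this (a simplified sketch, not my real code; `IndexPool` and its gather/scatter layout are just placeholders for illustration):

```
import torch

class IndexPool(torch.autograd.Function):
    """Sketch: pool by gathering values at given indices along the last dim."""

    @staticmethod
    def forward(ctx, x, indices):
        ctx.save_for_backward(indices)
        ctx.input_shape = x.shape
        return x.gather(-1, indices)

    @staticmethod
    def backward(ctx, grad_output):
        (indices,) = ctx.saved_tensors
        grad_input = torch.zeros(ctx.input_shape, dtype=grad_output.dtype,
                                 device=grad_output.device)
        # Route each output gradient back to the input position it was taken from;
        # scatter_add_ accumulates in case an index is selected more than once.
        grad_input.scatter_add_(-1, indices, grad_output)
        return grad_input, None  # no gradient for the integer indices

# usage:
x = torch.rand(2, 10, 5, requires_grad=True)
idx = torch.randint(0, 5, (2, 10, 3))
out = IndexPool.apply(x, idx)
```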
When I check the PyTorch built-ins, the same error pops up:
```
import torch

x = torch.rand(2, 10, 5)
pool = torch.nn.MaxPool1d(5)
out = pool(x)
out.backward()
>>> RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
Why is there no gradient? I thought max-pooling is made differentiable by backpropagating the gradient to each max element of the input, and nowhere else. Can someone please explain how this works in PyTorch?
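For what it’s worth, here is the behaviour I expected once gradients are actually being tracked (a sketch of my mental model: `requires_grad=True` on the input, and a scalar loss so that `backward()` needs no explicit gradient argument):

```
import torch

x = torch.rand(2, 10, 5, requires_grad=True)  # leaf tensor that tracks gradients
pool = torch.nn.MaxPool1d(5)
out = pool(x)           # shape (2, 10, 1)
out.sum().backward()    # reduce to a scalar so backward() is well-defined

# I expected x.grad to be 1 at each window's max element and 0 elsewhere:
print(x.grad)
```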