What if I don't know how to write a `backward()` when writing an extension for PyTorch?

For both the C extension and the NumPy extension for PyTorch, we have to write both forward and backward functions.

This may be a silly question, but I have to ask: we have autograd, which automatically calculates gradients for us, so normally we only need to worry about forward(). Why do we have to write backward() here?

What if I only know how to write a forward function, especially for complex layers like convolutions and RNNs? See the official NumPy extension example below.

What should I do when I have no idea how to write a complex backward function, like the ones below?

    # backward of the tutorial's FFT example
    def backward(self, grad_output):
        numpy_go = grad_output.numpy()
        result = irfft2(numpy_go)
        return torch.FloatTensor(result)
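(For context: this first backward belongs to the tutorial's FFT example. Here is a sketch of the whole class, assuming `rfft2`/`irfft2` come from `numpy.fft` as in the tutorial. Note that the tutorial itself names it `BadFFTFunction`: `irfft2` of the incoming gradient is not the true derivative of `abs(rfft2(...))`; the example only demonstrates the forward/backward plumbing.)

```python
import torch
from torch.autograd import Function
from numpy.fft import rfft2, irfft2

# Old-style Function API, as in the original tutorial.
# The backward is NOT a mathematically correct gradient of the
# forward; the tutorial uses it only to show the mechanics.
class BadFFTFunction(Function):
    def forward(self, input):
        numpy_input = input.numpy()
        result = abs(rfft2(numpy_input))
        return torch.FloatTensor(result)

    def backward(self, grad_output):
        numpy_go = grad_output.numpy()
        result = irfft2(numpy_go)
        return torch.FloatTensor(result)
```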

    # backward of the tutorial's convolution example
    def backward(self, grad_output):
        input, filter = self.saved_tensors
        grad_input = convolve2d(grad_output.numpy(), filter.t().numpy(), mode='full')
        grad_filter = convolve2d(input.numpy(), grad_output.numpy(), mode='valid')
        return torch.FloatTensor(grad_input), torch.FloatTensor(grad_filter)
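One practical aid when you do have to hand-write a backward: `torch.autograd.gradcheck` compares your analytic gradients against numerical ones. Below is a minimal sketch of the convolution example in the current static `Function` API (bias omitted; `ScipyConv2d` is a name I chose). For a `'valid'` cross-correlation, the input gradient is a `'full'` convolution of the output gradient with the filter, and the filter gradient is a `'valid'` cross-correlation of the input with the output gradient.

```python
import torch
from torch.autograd import Function, gradcheck
from scipy.signal import convolve2d, correlate2d

class ScipyConv2d(Function):
    # forward: 'valid' cross-correlation of input with filter,
    # computed in NumPy (so autograd cannot trace it)
    @staticmethod
    def forward(ctx, input, filter):
        input, filter = input.detach(), filter.detach()
        ctx.save_for_backward(input, filter)
        result = correlate2d(input.numpy(), filter.numpy(), mode='valid')
        return torch.as_tensor(result, dtype=input.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        input, filter = ctx.saved_tensors
        go = grad_output.detach().numpy()
        # d(loss)/d(input): full convolution of output grad with the filter
        grad_input = convolve2d(go, filter.numpy(), mode='full')
        # d(loss)/d(filter): 'valid' cross-correlation of input with output grad
        grad_filter = correlate2d(input.numpy(), go, mode='valid')
        return torch.from_numpy(grad_input), torch.from_numpy(grad_filter)

# gradcheck needs float64 inputs for the numerical comparison
x = torch.randn(6, 6, dtype=torch.float64, requires_grad=True)
w = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
print(gradcheck(ScipyConv2d.apply, (x, w)))  # -> True
```

If `gradcheck` passes, your backward agrees with finite differences, which is usually the hardest part to get right.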

How can I proceed then? What should I do to move forward and create an extension?
Thanks a lot!

Please could you avoid tagging multiple people like that? We are reading the forum; this just creates a lot of noise.

You only have to write a backward() when you are not working with Variables, i.e. when your forward() steps outside autograd (raw tensors, NumPy, custom C code).
With Variables, you only need to write the forward, because the backward of every supported operation is already implemented in PyTorch and autograd chains them together.
If you introduce a new operation that is not supported on Variables, you will have to write its backward yourself.
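To illustrate the distinction: when the forward is built only from operations autograd already knows, there is no backward to write anywhere. A minimal sketch, written against current PyTorch where plain tensors with `requires_grad=True` have taken over the role of Variables:

```python
import torch

# Forward pass built entirely from autograd-supported ops
# (matmul, tanh, sum) - no hand-written backward needed.
x = torch.randn(3, 5, requires_grad=True)
w = torch.randn(5, 4, requires_grad=True)

loss = torch.tanh(x @ w).sum()
loss.backward()  # autograd derives the whole backward pass

# x.grad now holds d(loss)/dx, chained automatically from the
# built-in derivatives of sum, tanh and matmul
print(x.grad.shape)  # torch.Size([3, 5])
```

Only the NumPy calls in the tutorial examples above break this chain, which is why those examples must supply a backward by hand.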


Sorry, I won't tag multiple people again. Thanks for the reply; the relation between backward() and Variables is helpful.