In MATLAB (the MatConvNet toolbox) there is a function called vl_nnconv, which, depending on its arguments, returns either the result of a convolution (forward pass) or the gradients with respect to the input, filters, and biases (backward pass). What is the closest equivalent in PyTorch, especially for the backward version? I'm extending the Conv2d module in a way that isn't differentiable, i.e. it uses operations like argmax and therefore needs approximations to compute practical gradients.
I'm not sure what you're looking for:
A way to implement a new operation with a custom backward?
Or a way to access the backward of the regular convolution directly?
The end goal is to implement a new operation with a custom backward, where the custom backward itself uses the backward of a regular convolution.
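One way to do this, assuming a reasonably recent PyTorch, is via the helpers in torch.nn.grad, which compute the input and weight gradients of a standard convolution. A minimal sketch follows; the name ApproxConv2d is hypothetical, and its forward is a plain convolution standing in for the real non-differentiable op:

```python
import torch
import torch.nn.functional as F

class ApproxConv2d(torch.autograd.Function):
    """Forward is a placeholder for the real (non-differentiable) op;
    backward reuses the gradients of an ordinary convolution."""

    @staticmethod
    def forward(ctx, input, weight, bias, stride, padding):
        ctx.save_for_backward(input, weight)
        ctx.stride, ctx.padding = stride, padding
        # In the real use case something non-differentiable (argmax etc.)
        # would happen here; a plain convolution stands in for it.
        return F.conv2d(input, weight, bias, stride=stride, padding=padding)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # Reuse the regular convolution's backward as the approximation.
        grad_input = torch.nn.grad.conv2d_input(
            input.shape, weight, grad_output,
            stride=ctx.stride, padding=ctx.padding)
        grad_weight = torch.nn.grad.conv2d_weight(
            input, weight.shape, grad_output,
            stride=ctx.stride, padding=ctx.padding)
        grad_bias = grad_output.sum(dim=(0, 2, 3))
        # One return value per forward argument (None for stride/padding).
        return grad_input, grad_weight, grad_bias, None, None
```

Since the forward here is a plain convolution, the gradients it produces should match what autograd computes for F.conv2d, which makes the sketch easy to sanity-check before swapping in the approximate forward.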
You can find here the instructions on how to build a new operation.
I don't think we provide direct access to the backward of a given convolution. But based on the parameters, you can simply use the corresponding ConvTranspose (with the same parameters, as long as you don't do anything fancy with dilated convolutions).
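As a quick sanity check of that equivalence, here is a sketch for the stride-1 case (strided or dilated convolutions need output_padding and more care): the gradient of conv2d with respect to its input matches conv_transpose2d applied to grad_output with the same weight and padding.

```python
import torch
import torch.nn.functional as F

# Stride-1 convolution: input gradient via autograd vs. conv_transpose2d.
x = torch.randn(1, 3, 8, 8, requires_grad=True)
w = torch.randn(5, 3, 3, 3)

out = F.conv2d(x, w, padding=1)
grad_out = torch.randn_like(out)
out.backward(grad_out)

# Same weight, same padding, applied to the incoming gradient.
manual = F.conv_transpose2d(grad_out, w, padding=1)
print(torch.allclose(x.grad, manual, atol=1e-5))  # → True
```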
Thanks. Is it feasible to use forward and backward hooks to modify the forward and backward passes; that is, register a forward hook on a module that changes the output it sends to the next module, and register a backward hook on the same module that changes the gradient it sends upstream (to the module preceding it in the forward pass)? Or is creating a new module the way to do that?
The reason I ask is that I am trying to do this by creating a new module that applies a subclass of autograd.Function, as in the tutorial on the site you mentioned, but the grad_output it receives in the backward function is full of either zeros or infinities, depending on how I implement it.
More specifically, I just added some backward hooks to check whether requires_grad was True, and for almost all modules it was False. The model is in train() mode. I am adapting the VGG16 model. Looking at the implementation, when the hook was registered on self.features it received a reasonable tensor as grad_output, but when I added it to an individual convolutional layer it received all zeros with requires_grad equal to False.
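For what it's worth, the legacy Module.register_backward_hook is known to report incorrect or surprising grad_input/grad_output on modules that are composed of several autograd ops, which can look exactly like the zeros described above; on PyTorch versions that have it (1.8+), register_full_backward_hook is the reliable variant. A sketch of modifying both passes with hooks, under those assumptions:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 4, 3, padding=1)

def forward_hook(module, inputs, output):
    # Returning a tensor replaces the output sent to the next module.
    return output * 2.0

def backward_hook(module, grad_input, grad_output):
    # grad_output: gradient w.r.t. the module's output.
    # Returning a tuple replaces grad_input (the gradient sent upstream).
    return tuple(g * 0.5 if g is not None else None for g in grad_input)

conv.register_forward_hook(forward_hook)
conv.register_full_backward_hook(backward_hook)

x = torch.randn(1, 3, 8, 8, requires_grad=True)
conv(x).sum().backward()
print(x.grad.shape)  # gradient flowed through both modified passes
```

Here the forward hook doubles the output and the backward hook halves the incoming gradient, so the net gradient at x should equal that of the unhooked convolution, which makes the two-sided modification easy to verify.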