I am implementing a modified version of Conv2D and therefore want to understand what exactly torch.nn.functional.conv2d is doing.
I think it internally uses torch.nn.grad.conv2d_weight and torch.nn.grad.conv2d_input. However, I am having a hard time understanding what exactly these functions do. I am also wondering how the gradient of the bias is computed. torch/nn/functional.py only contains a docstring for this function, but no actual code. Where can I find it?
The problem is that I can’t really find the internal implementation of torch.nn.functional.conv2d, so I can’t see how the other functions I mentioned are used and how the bias grad is computed.
Does anybody have more insight into this function?
These were created to give users easy access to the backward functions of convolutions. But they are not maintained anymore and will be deprecated in the future.
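For what it's worth, a minimal sketch of how these helpers can be used to reproduce what autograd computes (the `padding=1` / 3x3 setup here is just an illustrative choice; the bias gradient is simply the upstream gradient summed over the batch and spatial dimensions):

```python
import torch
import torch.nn.functional as F
from torch.nn.grad import conv2d_input, conv2d_weight

x = torch.randn(2, 3, 8, 8, requires_grad=True)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
b = torch.randn(4, requires_grad=True)

out = F.conv2d(x, w, b, padding=1)

# Backprop an arbitrary upstream gradient through autograd.
grad_out = torch.randn_like(out)
out.backward(grad_out)

# Recompute the same gradients by hand.
gi = conv2d_input(x.shape, w, grad_out, padding=1)   # dL/dx
gw = conv2d_weight(x, w.shape, grad_out, padding=1)  # dL/dw
gb = grad_out.sum(dim=(0, 2, 3))                     # dL/db

print(torch.allclose(gi, x.grad, atol=1e-4))
print(torch.allclose(gw, w.grad, atol=1e-4))
print(torch.allclose(gb, b.grad, atol=1e-4))
```

All three comparisons should print `True`, which also answers the bias question: there is no separate helper for it because the reduction is a one-liner.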
Can I “reuse” them, as in call them directly?
Unfortunately, we don’t really bind them at the moment, as it can be tricky to do in general.
What is your use case for this?
The use case is to compute the backward pass on slightly modified inputs compared to what the forward pass was computed on. So, I am planning to save the modified input in the ctx variable during the forward pass, and then use that input for the gradient computation in the backward pass.
If I understand correctly, won’t this mean that both the forward and backward passes use the modified input? I want the forward pass to be calculated on the original inputs and the backward-pass gradients to be computed with the help of the modified inputs.
In that case, yes you can use that function.
I would advise copying it (or just the subset you need) into your own code, as the version in F.grad is going to be removed in a future release.
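Putting the pieces of this thread together, here is one possible sketch of the pattern discussed above: a custom `torch.autograd.Function` whose forward runs on the original input while the backward recomputes gradients from a modified copy saved in `ctx`. The class name, the fixed 3x3/`padding=1` convolution, and the particular modification are all hypothetical choices for illustration:

```python
import torch
import torch.nn.functional as F
from torch.nn.grad import conv2d_input, conv2d_weight

class ModifiedBackwardConv2d(torch.autograd.Function):
    """Forward on the original input; backward on a modified copy."""

    @staticmethod
    def forward(ctx, x, weight, x_modified):
        # Save the *modified* input for the backward pass.
        ctx.save_for_backward(x_modified, weight)
        return F.conv2d(x, weight, padding=1)

    @staticmethod
    def backward(ctx, grad_out):
        x_mod, weight = ctx.saved_tensors
        # Gradients computed w.r.t. the modified input instead of x.
        grad_input = conv2d_input(x_mod.shape, weight, grad_out, padding=1)
        grad_weight = conv2d_weight(x_mod, weight.shape, grad_out, padding=1)
        return grad_input, grad_weight, None  # no grad for x_modified

x = torch.randn(2, 3, 8, 8)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
x_mod = x + 0.1 * torch.randn_like(x)  # hypothetical modification

out = ModifiedBackwardConv2d.apply(x, w, x_mod)
out.sum().backward()
print(w.grad.shape)  # torch.Size([4, 3, 3, 3])
```

Copying the bodies of `conv2d_input` and `conv2d_weight` into your own module, as suggested above, would make this independent of the deprecated `torch.nn.grad` module.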