I want to implement a custom convolution using nn.functional.conv2d, because both the forward and backward computations of nn.functional.conv2d are optimized on the GPU.
Specifically, I want to modify the backward pass of nn.functional.conv2d.
After the gradients (e.g. grad_input, grad_weight) are calculated, I want to return them with exp() applied, for example:
return grad_input.exp(), grad_weight.exp()
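To make the goal concrete, here is a minimal sketch of what such a custom backward could look like, written as a torch.autograd.Function that reuses PyTorch's optimized gradient kernels via torch.nn.grad (the class name ExpGradConv2d is my own; bias, stride, and padding are omitted for brevity):

```python
import torch
import torch.nn.functional as F
from torch.nn.grad import conv2d_input, conv2d_weight

class ExpGradConv2d(torch.autograd.Function):
    """Conv2d whose backward returns exp() of the usual gradients (sketch)."""

    @staticmethod
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        # Forward is the ordinary optimized convolution.
        return F.conv2d(input, weight)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # Compute the standard gradients with PyTorch's optimized helpers...
        grad_input = conv2d_input(input.shape, weight, grad_output)
        grad_weight = conv2d_weight(input, weight.shape, grad_output)
        # ...then apply the custom transformation before returning them.
        return grad_input.exp(), grad_weight.exp()

# Usage:
x = torch.randn(1, 2, 5, 5, requires_grad=True)
w = torch.randn(3, 2, 3, 3, requires_grad=True)
y = ExpGradConv2d.apply(x, w)
y.sum().backward()
```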
Well, you can rewrite the backward pass yourself, but that is complicated for such a simple change.
The simplest way is to modify the gradients in place after they have been computed.
Basically, just as there are utilities to clip gradients, you can design another one that applies exp() to the gradients of every convolution layer. Then you just have to call it between loss.backward() and optimizer.step().
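A minimal sketch of that approach, assuming a standard training loop (the helper name exp_conv_grads and the toy model are my own, for illustration):

```python
import torch
import torch.nn as nn

def exp_conv_grads(model):
    """Hypothetical helper: apply exp() in place to gradients of Conv2d layers."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            for p in module.parameters():
                if p.grad is not None:
                    p.grad.exp_()

# Toy model and one training step to show where the helper is called.
model = nn.Sequential(
    nn.Conv2d(1, 4, 3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 26 * 26, 2),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 1, 28, 28)
loss = model(x).sum()
loss.backward()
exp_conv_grads(model)  # modify conv gradients between backward and step
opt.step()
```

Only the Conv2d gradients are transformed; the Linear layer's gradients are left untouched, so the rest of the network trains normally.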
Keep in mind that overriding PyTorch internals is not forward-compatible, so modifying the gradients before the optimizer step seems safer to me (and compatible with future versions of PyTorch); in fact, those tools were developed precisely so you don't have to modify the core.