Overriding nn.functional.conv2d

Hi :slight_smile:

I want to implement my custom convolution function on top of nn.functional.conv2d, because its forward and backward computations are already optimized for the GPU.

Specifically, I want to modify the backward method of nn.functional.conv2d.

After computing the gradients (e.g. grad_input, grad_weight), I want to return them with exp() applied, for example:
return grad_input.exp(), grad_weight.exp()

Is there any way to do this?

Thanks

Well, you can rewrite the backward part, but that looks complicated for such a simple change.
The simplest way is to modify the gradients in place after they are computed.
Basically, just as there is a utility to clip gradients, you can design another one that applies exp() when the parameter belongs to a convolution. Then you just have to call it between loss.backward() and the optimizer step, as in the sketch below.
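A minimal sketch of that idea (the helper name exp_conv_grads_ is my own, not a PyTorch API), assuming the model's convolutions are nn.Conv2d modules:

```python
import torch
import torch.nn as nn

def exp_conv_grads_(model):
    # Apply exp() in place to the weight gradients of every Conv2d,
    # in the same spirit as torch.nn.utils.clip_grad_norm_.
    for module in model.modules():
        if isinstance(module, nn.Conv2d) and module.weight.grad is not None:
            module.weight.grad.exp_()

# Usage:
# loss.backward()
# exp_conv_grads_(model)
# optimizer.step()
```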


How can I rewrite the backward part of built-in functions (e.g. nn.functional.conv2d)?

Hard question for me, as it's C++ based, and I'm not really familiar with the torch core. @ptrblck may be able to help you. It looks like https://github.com/pytorch/pytorch/blob/f8086845aacdb6e5c3e46313899105255a5a9822/torch/nn/grad.py may be relevant, but I don't really know where exactly it is called.
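If that file is indeed the right entry point, one possible sketch (my own, not from the thread) is a custom torch.autograd.Function that reuses the helpers in torch/nn/grad.py for the standard gradients and then applies exp(); it assumes stride=1, padding=0 and no bias for brevity:

```python
import torch
import torch.nn.functional as F
from torch.nn import grad as nn_grad

class ExpGradConv2d(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        # Reuse the optimized built-in forward.
        return F.conv2d(input, weight)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # Standard conv2d gradients via the helpers in torch/nn/grad.py.
        grad_input = nn_grad.conv2d_input(input.shape, weight, grad_output)
        grad_weight = nn_grad.conv2d_weight(input, weight.shape, grad_output)
        # The custom modification asked about above.
        return grad_input.exp(), grad_weight.exp()

# Usage:
# out = ExpGradConv2d.apply(x, conv_weight)
```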

Consider that overriding PyTorch internals is not forward-compatible, so the option of modifying the gradients between backward() and the optimizer step seems safer to me (and compatible with future versions of PyTorch); in fact, those tools were developed precisely so you don't have to modify the core.