Custom backward step for convolutions

Hello, I’m trying to understand how the inputs to the backward pass are provided for different layers and how the output gradients are saved. In short, I managed to write a backward computation for a linear layer that computes the gradient only for a subset of the weights, but I can’t find any example of a backward pass implementation for a convolutional layer. Alternatively, is there a way to call the convolution backward function inside my backward?
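
For context, here’s a minimal sketch of the kind of thing I did for the linear case (the masking scheme here is just illustrative):

```python
import torch

class MaskedLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, mask):
        # x: (N, in_features), weight: (out_features, in_features)
        ctx.save_for_backward(x, weight, mask)
        return x @ weight.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, weight, mask = ctx.saved_tensors
        grad_x = grad_out @ weight
        # Full weight gradient, then zero out the entries we don't train
        grad_w = (grad_out.t() @ x) * mask
        return grad_x, grad_w, None  # no gradient for the mask

x = torch.randn(5, 3, requires_grad=True)
w = torch.randn(4, 3, requires_grad=True)
mask = (torch.rand(4, 3) > 0.5).float()
MaskedLinear.apply(x, w, mask).sum().backward()
```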

Hey!

I think test/test_flop_counter.py in the pytorch/pytorch GitHub repo (at commit d3839b624b5f6451a13bd9b5ecbbce4c2a9b1db6) contains an implementation of convolution backward as a set of convolution ops.
It doesn’t cover all the arguments of a real convolution, but I think it’s a good base for what you want.
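
For the simplest configuration (stride 1, no padding, no dilation, no groups), both gradients can be written as plain conv ops. A minimal sketch, with the shapes chosen just for illustration:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8, requires_grad=True)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
y = F.conv2d(x, w)  # stride 1, padding 0
gy = torch.randn_like(y)

# grad wrt input: a transposed convolution with the same weight
gx = F.conv_transpose2d(gy, w)

# grad wrt weight: a convolution of the input with the output gradient,
# with batch and channel dims swapped so batch plays the reduction role
gw = F.conv2d(x.transpose(0, 1), gy.transpose(0, 1)).transpose(0, 1)

# Check against autograd's own backward
gx_ref, gw_ref = torch.autograd.grad(y, (x, w), gy)
print(torch.allclose(gx, gx_ref, atol=1e-4), torch.allclose(gw, gw_ref, atol=1e-4))
```

Extending this to stride, padding, dilation, and groups takes more bookkeeping.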

Yes, that was a good way to start, but in the end I found out about torch.nn.grad.conv2d_input and torch.nn.grad.conv2d_weight, and those worked even better! Thank you for your answer though
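
For anyone who finds this later, a minimal usage sketch (shapes and hyperparameters are just examples; comparing against autograd as a sanity check):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8, requires_grad=True)
w = torch.randn(4, 3, 5, 5, requires_grad=True)
y = F.conv2d(x, w, stride=2, padding=1)
gy = torch.randn_like(y)

# Gradients via the torch.nn.grad helpers
gx = torch.nn.grad.conv2d_input(x.shape, w, gy, stride=2, padding=1)
gw = torch.nn.grad.conv2d_weight(x, w.shape, gy, stride=2, padding=1)

# Compare against autograd's own backward
gx_ref, gw_ref = torch.autograd.grad(y, (x, w), gy)
print(torch.allclose(gx, gx_ref, atol=1e-4), torch.allclose(gw, gw_ref, atol=1e-4))
```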

Hey!

The torch.nn.grad functions are not necessarily accurate, so I wouldn’t rely on them for anything serious (that’s why I didn’t suggest them here).
But if they work for you, that’s good.

Hi! Sorry, but can you clarify what you mean by them not being accurate? I thought they replicated the backward behavior of a conv layer?

They are not used much and so not very carefully reviewed. We’ve had quite a few issues about them before on the pytorch/pytorch issue tracker on GitHub.


I see, thank you. I’ll evaluate whether they suit my use case; provisionally they’ll stay until I move to something more reliable.