Hi, I’ve looked through the posts here, but they are either too specific for my purpose or link to deleted Git repositories.

What I would like to do is start out with just a normal conv2d forward and backward, so that I can play with passing and editing the gradients.
In other words, I’d like a backward function written as if we had implemented conv2d backward ourselves, which I can then edit to use approximately calculated gradients.

For the forward pass I know it needs to use unfold to get the sliding windows, but how can I write the backward?
Thanks in advance!

Implementing backpropagation manually for convolutional layers is a complex task, especially without any autograd system. You would spend a lot of time on the calculations, even for a simple network or layer.
If you really want complete control, you can implement your own toy autograd system (although I think even PyTorch allows this?), which is not that complex once you understand the underlying principles.
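To show what "not that complex" means here, this is a minimal sketch of such a toy autograd system (not from this thread; the `Value` class and its operator set are my own illustration, scalar-only, supporting just `+` and `*`):

```python
# A minimal scalar autograd: each Value records its parents and a
# local backward rule; backward() walks the graph in reverse order
# and applies the chain rule node by node.
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def bwd():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = bwd
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def bwd():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = bwd
        return out

    def backward(self):
        # Topologically sort the graph, then propagate gradients back.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            if v._backward_fn:
                v._backward_fn()
```

For example, with `c = a * b + a`, calling `c.backward()` fills in `a.grad` with `b + 1` and `b.grad` with `a`, exactly as the chain rule dictates.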

What I really want is to do the backprop steps manually, because I am working on a proof-of-concept approximation of gradients. That would be MUCH easier if I had a “normal” conv2d backward implementation to edit.
Practical performance therefore isn’t key,
so I’d appreciate it if anyone can help me there.
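Since performance isn’t key, one starting point (my own sketch, not code from this thread) is a plain-loop conv2d forward and backward, assuming a single channel, stride 1, and no padding, where every gradient contribution is an explicit line you can edit or approximate:

```python
# conv2d forward/backward with plain loops (single channel, stride 1,
# no padding) -- slow, but every gradient term is explicit and editable.

def conv2d_forward(x, w):
    H, W = len(x), len(x[0])
    kH, kW = len(w), len(w[0])
    out = [[0.0] * (W - kW + 1) for _ in range(H - kH + 1)]
    for i in range(H - kH + 1):
        for j in range(W - kW + 1):
            for a in range(kH):
                for b in range(kW):
                    out[i][j] += x[i + a][j + b] * w[a][b]
    return out

def conv2d_backward(x, w, grad_out):
    # grad_out has the shape of the forward output.
    H, W = len(x), len(x[0])
    kH, kW = len(w), len(w[0])
    grad_x = [[0.0] * W for _ in range(H)]
    grad_w = [[0.0] * kW for _ in range(kH)]
    for i in range(len(grad_out)):
        for j in range(len(grad_out[0])):
            g = grad_out[i][j]
            for a in range(kH):
                for b in range(kW):
                    # dL/dw: correlate the upstream gradient with the input.
                    grad_w[a][b] += g * x[i + a][j + b]
                    # dL/dx: scatter the upstream gradient through the kernel.
                    grad_x[i + a][j + b] += g * w[a][b]
    return grad_x, grad_w
```

To test an edited version, you can compare `grad_x`/`grad_w` against finite differences of a scalar loss (e.g. the sum of the output) on a small input.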

I don’t think there is a single place in the code where the backpropagation step happens — there is no dedicated backprop routine for conv2d. There are just tensors, and the Tensor class handles backpropagation for any operation that has been performed on a tensor. I would suggest exploring the Tensor class, or reading chapter 13 of Grokking Deep Learning (by Andrew W. Trask) for a better explanation.
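That said, PyTorch does let you supply your own backward via `torch.autograd.Function`, which seems to be what the original question is after: an unfold-based forward with a hand-written, editable backward. A minimal sketch (my own, assuming stride 1, no padding, no bias):

```python
import torch
import torch.nn.functional as F

class ManualConv2d(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight):
        # x: (N, C, H, W), weight: (O, C, kH, kW); stride 1, no padding.
        N, C, H, W = x.shape
        O, _, kH, kW = weight.shape
        cols = F.unfold(x, (kH, kW))          # (N, C*kH*kW, L) sliding windows
        out = weight.view(O, -1) @ cols       # (N, O, L)
        ctx.save_for_backward(x, weight)
        return out.view(N, O, H - kH + 1, W - kW + 1)

    @staticmethod
    def backward(ctx, grad_out):
        # This is the method to edit when experimenting with
        # approximated gradients.
        x, weight = ctx.saved_tensors
        N, C, H, W = x.shape
        O, _, kH, kW = weight.shape
        g = grad_out.reshape(N, O, -1)        # (N, O, L)
        cols = F.unfold(x, (kH, kW))
        # dL/dW: correlate upstream gradient with the input patches.
        grad_w = (g @ cols.transpose(1, 2)).sum(0).view_as(weight)
        # dL/dX: push the gradient back through the patches, then let
        # fold sum the overlapping windows back into image shape.
        grad_cols = weight.view(O, -1).t() @ g
        grad_x = F.fold(grad_cols, (H, W), (kH, kW))
        return grad_x, grad_w
```

`torch.autograd.gradcheck(ManualConv2d.apply, (x, w))` on small double-precision inputs can verify the hand-written backward against numerical gradients before you start swapping in approximations.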