Autograd graph computation interrupted

Hello,
I have a numpy layer in my network which is not involved in back-propagation. When I put this numpy_layer(.) between the layers of my network, the backpropagation algorithm updates the weights of layer4 and layer5 but not the layers prior to numpy_layer(.). If I remove numpy_layer(.), backpropagation indeed works for all the layers. What's wrong?
Here is my code:

def forward(x):
    x = layer1(x)
    x = layer2(x)
    x = layer3(x)
    x = numpy_layer(x)  # not concerned by backpropagation
    x = layer4(x)
    x = layer5(x)
    return x

I think the reason backward doesn't reach the first three layers is that my computation graph is interrupted.
One solution would be to implement the backward function manually, since Autograd cannot create it automatically once you leave PyTorch.

I would like to backpropagate from layer4 to layer3 without backpropagating through numpy_layer(x) during the backward pass.
How do I write numpy_layer(x) correctly so that it doesn't interrupt the computation graph?

I'm thinking about the following solution:
writing an autograd Function for numpy_layer() that does nothing in backward():

class numpy_layer(torch.autograd.Function):
    def forward(self, x):
        x = x.cpu().data.numpy()
        x = my_numpy(x)  # numpy operations
        x = torch.from_numpy(x)
        return x

    def backward(self, grad_output):
        pass  # nothing

Is this implementation correct?
I'm not sure about it, because I don't see clearly how I can backpropagate from layer 4 to layer 3 without passing through/updating numpy_layer().

The shapes of my layers are as follows:

  x = layer1(x)       # shape: x \in R^{n}
  x = layer2(x)       # shape: x \in R^{n}
  x = layer3(x)       # shape: x \in R^{n}
  x = numpy_layer(x)  # not concerned by backpropagation, shape: x \in R^{m}, where n << m
  x = layer4(x)       # shape: x \in R^{m}
  x = layer5(x)       # shape: x \in R^{m}

My numpy function just expands x (a concatenation combined with some operations) from x \in R^{n} to x \in R^{m}, where n << m.
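Just to give a flavour (this is not my actual code, only an illustration of the kind of expansion I mean):

```python
import numpy as np

# Purely illustrative stand-in: concatenate x with a few element-wise
# transforms of it, so the output dimension m is a multiple of n.
def my_numpy(x):
    return np.concatenate([x, x ** 2, np.sin(x)], axis=-1)
```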

Thank you

Hi DeepLearner17,
Please go through the official tutorial on creating a custom autograd Function.
The example you have provided needs to return a grad_input, which layer3 requires to calculate its gradients. That is, grad_output will be in R^{m}; you need to transform it back to R^{n} and return it so the gradient can pass through.
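Something like this (a sketch only; since my_numpy isn't shown, I'm assuming for concreteness that it concatenates x with x**2 and sin(x), so m = 3n, and the backward applies the chain rule to each block to map grad_output from R^{m} back to R^{n}):

```python
import torch
import numpy as np

class NumpyLayer(torch.autograd.Function):
    # Sketch under the assumption my_numpy(x) = concat([x, x**2, sin(x)]);
    # replace backward with the true derivative of your own expansion.

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        x_np = x.detach().cpu().numpy()
        out_np = np.concatenate([x_np, x_np ** 2, np.sin(x_np)], axis=-1)
        return torch.from_numpy(out_np).to(x.device, x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # grad_output is in R^{m} (= R^{3n}); fold it back to R^{n} by applying
        # the chain rule to each concatenated block, then return it so that
        # layer3, layer2 and layer1 still receive gradients.
        g1, g2, g3 = grad_output.chunk(3, dim=-1)
        grad_input = g1 + 2 * x * g2 + torch.cos(x) * g3
        return grad_input
```

Then call it as x = NumpyLayer.apply(x) inside your forward; apply is what registers the Function in the graph.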

Alternatively, PyTorch was built to mimic NumPy to a great degree, so almost any function written with numpy can be converted to its PyTorch equivalent, which lets the gradient flow automatically.
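For example, the same illustrative expansion written with torch ops only needs no custom Function at all; autograd builds the backward for you:

```python
import torch

# The illustrative expansion above, done with torch ops instead of numpy.
# Every op is tracked by autograd, so the gradient flows from layer4
# back to layer3 automatically.
def torch_expansion(x):
    return torch.cat([x, x ** 2, torch.sin(x)], dim=-1)

x = torch.randn(8, 16, requires_grad=True)
torch_expansion(x).sum().backward()
print(x.grad.shape)  # torch.Size([8, 16]) -- gradient reaches the input
```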