Backpropagating w.r.t. a detached tensor

In my code, I have a class whose forward function takes the output of a conv2d as its input x. The input x is therefore attached to the computational graph and requires grad. However, I have to do some calculations in the forward function that inevitably require converting x to a numpy array, which detaches it from the computational graph.

    def forward(self, x):
        x = x.detach().cpu().numpy()

The problem is that I need to pass the gradient w.r.t. x back to the preceding conv2d layer. Will this be possible if I write a custom backward pass as described in Custom Autograd (Extending PyTorch — PyTorch 2.2 documentation), even though x is detached?

Yes, you’d want to use a custom autograd Function here for the part of the computation that you perform in numpy but still want to backpropagate through (see Automatic differentiation package — torch.autograd — PyTorch 2.2 documentation).
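
As a minimal sketch (not your actual computation): here the numpy part is just an element-wise sine, and the hand-written backward returns its gradient so it can flow back to the conv2d that produced x. The name NumpyOp and the sine are placeholders for whatever you actually compute in numpy.

    import numpy as np
    import torch

    class NumpyOp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            # Detach and move to CPU so numpy can be used.
            x_np = x.detach().cpu().numpy()
            ctx.save_for_backward(x)      # keep the input for the backward pass
            y_np = np.sin(x_np)           # the numpy-side computation (placeholder)
            return torch.from_numpy(y_np).to(x.device, x.dtype)

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Hand-written gradient of the numpy computation: d/dx sin(x) = cos(x).
            return grad_output * torch.cos(x)

    def forward(self, x):
        # x comes from the preceding conv2d and requires grad;
        # gradients now flow back through NumpyOp.backward.
        return NumpyOp.apply(x)

The key point is that autograd never sees the numpy code; it only calls your backward, so you are responsible for writing the correct gradient of whatever you did in forward.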

Thanks for the answer!