How to preserve autograd of tensor after .detach() and processing it?

Hi, thank you for replying!
In my work, I want to modify the output of a conv2d layer before feeding it into the next conv2d layer. I originally used a nested for loop to process this output, but it was far too slow, so I convert the tensor to numpy and run the loop with numba to cut the processing time. Then I convert T_numpy back into a tensor and use this new tensor as the input of the next conv2d layer.
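Roughly, my workflow looks like the sketch below (the `process` function, layer sizes, and names are just placeholders for illustration, not my actual code):

```python
import numpy as np
import torch
import torch.nn as nn
from numba import njit

@njit
def process(arr):
    # Placeholder for the expensive per-element transformation
    # that used to be a plain Python nested loop.
    out = np.empty_like(arr)
    n, c, h, w = arr.shape
    for i in range(n):
        for j in range(c):
            for k in range(h):
                for l in range(w):
                    out[i, j, k, l] = arr[i, j, k, l] * 2.0  # dummy operation
    return out

conv1 = nn.Conv2d(3, 8, 3)
conv2 = nn.Conv2d(8, 16, 3)

x = torch.randn(1, 3, 32, 32)
y = conv1(x)

# pytorch -> numpy: this step leaves the autograd graph
y_np = y.detach().cpu().numpy()
T_numpy = process(y_np)

# numpy -> pytorch: a brand-new leaf tensor with no history back to conv1
y_new = torch.from_numpy(T_numpy).requires_grad_()

out = conv2(y_new)
```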
I saw here that @albanD stated:

People not very familiar with requires_grad and cpu/gpu Tensors might go back and forth with numpy. For example doing pytorch → numpy → pytorch and backward on the last Tensor. This will backward without issue but not all the way to the first part of the code and won’t raise any error.

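If I understand that quote correctly, the behaviour would be something like this small sketch (again with placeholder layers, not my real model):

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 8, 3)
conv2 = nn.Conv2d(8, 16, 3)

x = torch.randn(1, 3, 32, 32)
y = conv1(x)

# round trip through numpy cuts the autograd graph
y_new = torch.from_numpy(y.detach().numpy().copy()).requires_grad_()

loss = conv2(y_new).sum()
loss.backward()  # runs without any error

print(conv1.weight.grad)        # None -> no gradient ever reaches conv1
print(conv2.weight.grad.shape)  # torch.Size([16, 8, 3, 3]) -> conv2 still gets one
```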
So is it possible to just leave it that way and continue training? Is it still okay if I do this conversion many times during training?