How to preserve the gradient through .detach(), or suggestions for a replacement

Hi there, I’ve noticed that detach() prevents the backward pass.

Is there any way to replace detach().cpu().numpy() while still preserving the gradient?

The desired operation is:

Tensor (GPU) feature → CPU → NumPy (or something similar) with a gradient (which can still be backpropagated)

It is also okay to use something other than NumPy, like a list or anything else that still preserves the gradient after detaching or copying.

Thanks,

No, since detach() explicitly detaches the tensor from the computation graph as indicated by the function name. If you need to use a 3rd party library for some computations (e.g. numpy), you could implement a custom autograd.Function and write the backward pass manually as described here.
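Below is a minimal sketch of that approach, assuming an element-wise sine is the NumPy computation you want to run; the name NumpySin and the choice of operation are just for illustration. The forward pass leaves the graph on purpose, and the backward pass supplies the gradient by hand so the rest of the model can still backpropagate through it.

```python
import torch
import numpy as np


class NumpySin(torch.autograd.Function):
    """Runs sin() in NumPy in the forward pass and defines the
    matching backward pass manually."""

    @staticmethod
    def forward(ctx, x):
        # Intentionally leave the graph: move to CPU and convert to NumPy.
        x_np = x.detach().cpu().numpy()
        ctx.save_for_backward(x)
        result = np.sin(x_np)
        # Return a tensor on the original device so downstream ops keep working.
        return torch.from_numpy(result).to(x.device)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx sin(x) = cos(x); apply the chain rule with the incoming gradient.
        return grad_output * torch.cos(x)


# Usage: gradients flow back to x even though the forward computation ran in NumPy.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, device=device, requires_grad=True)
y = NumpySin.apply(x)
y.sum().backward()
print(x.grad)  # equals cos(x)
```

The key point is that autograd only needs a correct backward() for the operation you implemented; what happens inside forward() (NumPy, a list, any third-party code) is invisible to the graph.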
