Hi there, I've noticed that `detach()` prevents the backward pass.
Is there any way to replace `detach().cpu().numpy()` while preserving the gradient?
The operation I want is:
Tensor (GPU) → CPU → NumPy (or something similar) with gradient (so that backward still works)
It's okay if there's another option besides NumPy, like a list or anything else that still preserves the gradient after the detach/copy.
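A minimal sketch of the behavior I mean (using a CPU tensor for illustration, since `.cpu()` on a GPU tensor interacts with autograd the same way; this assumes only that `torch` is installed):

```python
import torch

# A tensor that tracks gradients.
x = torch.randn(3, requires_grad=True)

# Calling .numpy() on a grad-tracking tensor raises a RuntimeError,
# which is why detach() normally has to be inserted first:
try:
    (x * 2).numpy()
except RuntimeError as e:
    print("numpy() failed:", e)

# .cpu() alone keeps the autograd graph, so backward still works
# through the CPU copy:
z = (x * 2).cpu().sum()
z.backward()
print(x.grad)  # each entry is 2.0, flowing back through .cpu()
```

So the conversion to NumPy is the step that forces the detach, not the move to CPU.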