Get a tensor out of the session, modify it, then put it back

Quick question. Do you know how to transfer a CUDA tensor to a numpy.ndarray (say, the output of a CNN), do some manipulation on the numpy.ndarray, then put it back into a tensor so that it stays in the session and works with autograd?

Short answer: No.

When you convert a tensor to a numpy array, you essentially move out of the PyTorch ecosystem, and autograd has no trace of any operations performed on the numpy array. Try to compose the operation using torch APIs, or write your own PyTorch extension to calculate the gradients yourself.
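A minimal sketch of the second option (the `NumpyExp` name and the `exp` operation are just illustrative placeholders, not from the thread): a custom `torch.autograd.Function` whose forward pass runs in numpy but supplies its own backward, so the graph is not broken.

```python
import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    """Placeholder example: exp() computed in numpy, with a hand-written backward."""

    @staticmethod
    def forward(ctx, x):
        # Detach and move to CPU before leaving the torch world.
        result = np.exp(x.detach().cpu().numpy())
        out = torch.from_numpy(result).to(x.device)
        ctx.save_for_backward(out)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx exp(x) = exp(x), which was saved in forward.
        (out,) = ctx.saved_tensors
        return grad_output * out

x = torch.randn(3, requires_grad=True)
y = NumpyExp.apply(x).sum()
y.backward()
print(torch.allclose(x.grad, x.exp()))  # gradients match torch's own exp
```

Anything done inside `forward` is invisible to autograd, so `backward` must return the correct gradient for whatever the numpy code actually computes.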


Hi, Arul. Is it possible to use something like tensor.clone().cpu().numpy() and torch.from_numpy()?
I'm trying to make the tensor and the numpy array share the same memory, so that if I change the numpy array, the tensor changes too. It's just an idea; I don't know whether it's possible.
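For reference, a quick sketch of what does and does not share memory here (CPU tensors for illustration; both `.clone()` and `.cpu()` on a CUDA tensor produce copies):

```python
import torch

# On CPU, tensor.numpy() shares memory with the tensor...
t = torch.zeros(3)
a = t.numpy()
a[0] = 1.0
print(t[0].item())  # 1.0 — the tensor sees the change

# ...but clone() copies, so changes to this array never reach t:
c = t.clone().numpy()
c[1] = 5.0
print(t[1].item())  # 0.0 — unchanged

# Either way, the round-trip tensor carries no autograd history:
x = torch.ones(3, requires_grad=True)
back = torch.from_numpy(x.detach().numpy())
print(back.grad_fn)  # None — autograd has no trace of this tensor
```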

Many Thanks

As I said in the previous post, autograd doesn't work with numpy. Note also that .clone() and .cpu() both make copies, so the numpy array would not share memory with the original CUDA tensor anyway.
If it is possible, try to compose your operation using pure torch APIs.

Understood, I'll try that.