Suggestion: Moving to NumPy should automatically move to CPU

Currently a PyTorch Tensor on the GPU has to be moved to the CPU before one can call Tensor.numpy(). Is there any reason why a call to .numpy() shouldn’t automatically move the Tensor to the CPU beforehand?
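To illustrate the current behavior, here is a minimal sketch (guarded so it also runs on CPU-only machines): calling `.numpy()` on a CUDA tensor raises a `TypeError`, so the explicit `.cpu()` move is required first.

```python
import torch

# Use the GPU if one is available; otherwise fall back to CPU so the
# sketch still runs everywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"
t = torch.arange(4.0, device=device)

if device == "cuda":
    try:
        # Direct conversion from a CUDA tensor is rejected by PyTorch.
        t.numpy()
    except TypeError as e:
        print("direct .numpy() failed:", e)

# The explicit move to host memory, then the conversion
# (zero-copy for a tensor already on the CPU).
arr = t.cpu().numpy()
print(arr)
```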

It is better to keep the move explicit rather than implicit. Implicit moves cause mysterious bugs that make developers scratch their heads.

In general I agree with you, but since there are no NumPy GPU arrays, calling Tensor.numpy() always implies a conversion to the CPU. Or can you think of any case where this is not true?

The __array__ and __array_wrap__ interfaces are now implemented in PyTorch, so you can pass CPU tensors directly to NumPy functions and that should work out of the box.


This is even better! I am already looking forward to this feature.