What happens to the precision of PyTorch tensors on the GPU when they are converted to NumPy arrays on the CPU?

I am not sure whether the precision of a PyTorch tensor is maintained when I convert it to a NumPy array. What precision does a standard PyTorch nn layer use? When I use the code below, do I keep the same number of decimals? Even when I set the print options of both PyTorch and NumPy as high as possible, the NumPy arrays seem to have lower precision. I then tried converting the PyTorch tensors to np.float128, which makes the printed values more precise, but I am still not sure whether I am capturing the full precision of the PyTorch tensors.

Below are the two options I tried. I want to make sure my NumPy CPU array captures at least the same precision as my PyTorch GPU tensor, which holds the activation values of the neurons in a layer of a feed-forward neural network (FFNN).

First option, which gives a NumPy float32 array:
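Since the original snippet isn't shown, here is a minimal sketch of what the first option likely looked like (the tensor name and shape are assumptions). A standard nn layer computes in torch.float32, PyTorch's default dtype, and the conversion preserves it:

```python
import torch

# Stand-in for the activations of one FFNN layer; a standard nn layer
# computes in torch.float32, PyTorch's default dtype.
acts = torch.randn(4, 8)

# .detach() drops the autograd graph, .cpu() moves a CUDA tensor to host
# memory (a no-op on a CPU tensor), .numpy() reinterprets the same
# 32-bit values without copying or rounding anything.
arr = acts.detach().cpu().numpy()

print(arr.dtype)  # float32
```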


Second option, forcing the NumPy array to be float128. It seemed to make a difference whether I used float64 or float128, which probably means NumPy float64 didn't capture the full precision; maybe float128 doesn't either…
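A sketch of the second option (assuming the upcast happens after the conversion, the usual pattern) shows why the extra width cannot help. np.longdouble stands in for np.float128 here, since float128 only exists on platforms with an extended-precision type:

```python
import numpy as np
import torch

x = torch.tensor(1/3)                   # 1/3 is rounded to float32 here
wide = x.numpy().astype(np.longdouble)  # np.float128 on most Linux builds

# The upcast pads the stored float32 value with zero bits; the digits
# rounded away when the tensor was created are gone for good.
print(wide == np.longdouble(1) / 3)     # False
```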


I hope someone can tell me how to make sure my NumPy CPU array captures at least the same precision as my PyTorch GPU tensor.

Thanks in advance!

The NumPy array and PyTorch tensor should be bitwise identical, as they share the same underlying data, as seen here:

```python
import torch

x = torch.tensor(1/3)  # float32 by default
y = x.numpy()          # no copy: y shares x's underlying memory

y += 0.1               # modifying the array in place...
print(x)               # ...changes the tensor too
```
Upcasting the data does not make the result more precise, as the original value was already rounded to the lower precision.
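One way to convince yourself of the bitwise claim is to reinterpret both buffers as integer bit patterns and compare them directly (a sketch; torch.Tensor.view(dtype) reinterprets the same memory without conversion):

```python
import numpy as np
import torch

x = torch.rand(1000)  # float32 CPU tensor
y = x.numpy()         # shares x's memory

# View the identical bytes as 32-bit integers on both sides.
bits_t = x.view(torch.int32).numpy()
bits_n = y.view(np.int32)

print((bits_t == bits_n).all())  # True: every bit matches
```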

Thanks a lot for the clarification!