What happens to the precision of PyTorch tensors on the GPU when converted to NumPy arrays on the CPU?

I am not sure whether the precision of a PyTorch tensor is maintained when I convert it to a NumPy array. What precision does a standard PyTorch nn layer use? When I use the code below, do I keep the same number of decimals? Even when I set the print options of both PyTorch and NumPy as high as possible, the NumPy arrays seem to have lower precision. I then tried converting the PyTorch tensors to np.float128, which makes the printed values more precise, but I am still not sure whether I am capturing the full precision of the PyTorch tensors.

Below are the two options I tried. I want to make sure my NumPy CPU array captures at least the same precision as the PyTorch GPU tensor holding the activation values of the neurons in a layer of my FFNN.

First option, which gives a NumPy float32 array:

activationvector.detach().cpu().numpy()
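
For context, here is a minimal dtype check with a toy layer standing in for my real network (the layer and shapes are just placeholders):

import torch

# toy stand-in for one layer of my FFNN; nn layers default to float32
layer = torch.nn.Linear(4, 3)
activationvector = layer(torch.randn(1, 4))

print(activationvector.dtype)                         # torch.float32
print(activationvector.detach().cpu().numpy().dtype)  # float32: dtype is preserved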

Second option, forcing the NumPy array to be float128. It seemed to make a difference whether I used float64 or float128, which probably means NumPy float64 didn't capture the full precision; maybe float128 doesn't either…

np.array(activationvector.detach().cpu(), dtype=np.float128)

I hope someone can tell me how to make sure my NumPy CPU array captures at least the same precision as my PyTorch GPU tensor.

Thanks in advance!

The NumPy array and the PyTorch tensor should be bitwise identical: .numpy() shares the tensor's underlying data (and .cpu() beforehand produces a bitwise copy of the GPU data), as seen here:

import numpy as np
import torch

x = torch.tensor(1/3)  # scalar float32 tensor

print('{:.20f}'.format(x))
print('{:.20f}'.format(x.numpy()))
print('{:.20f}'.format(x.numpy().astype(np.float64)))

y = x.numpy()  # y shares x's memory, no copy is made
y += 0.1       # in-place update, visible through x as well

print('{:.20f}'.format(x))
print('{:.20f}'.format(x.numpy()))
print('{:.20f}'.format(x.numpy().astype(np.float64)))
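
The in-place y += 0.1 shows up in x in the second set of prints, because both objects wrap the same buffer. Continuing from the snippet above, np.shares_memory can confirm this directly:

print(np.shares_memory(y, x.numpy()))  # True: the array views the tensor's storage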

Upcasting the data should not make the result more precise: the value was already rounded when it was stored in the lower precision, so the extra bits carry no information.
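
As a small sketch of that point (the variable names here are just for illustration): upcasting float32 1/3 to float64 round-trips exactly, but does not match 1/3 computed directly in float64:

import numpy as np

x32 = np.float32(1.0) / np.float32(3.0)    # 1/3 rounded to float32
up64 = np.float64(x32)                     # upcast: exact copy of the float32 value
ref64 = np.float64(1.0) / np.float64(3.0)  # 1/3 rounded to float64

print(np.float32(up64) == x32)  # True: upcasting loses nothing
print(up64 == ref64)            # False: it also recovers nothing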

Thanks a lot for the clarification!