I am not sure whether, when I convert a PyTorch tensor into a NumPy array, the precision of the tensor is maintained in the array. What precision does a standard PyTorch nn layer use? When I use the code below, do I keep the same number of decimals? Even when I set the print options of both PyTorch and NumPy as high as possible, the NumPy arrays seem to have lower precision. I then tried converting the PyTorch tensors to np.float128, which makes the printout more precise, but I am still not sure whether I am capturing the full precision of the PyTorch tensors.

Below are the two options I tried. I want to make sure my NumPy CPU array captures at least the same precision as my PyTorch GPU tensor, which holds the activation values of the neurons in one layer of a feed-forward neural network (FFNN).

First option, which gives a NumPy float32 array:

```
activationvector.detach().cpu().numpy()
```
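To show what that conversion produces, here is a quick dtype check I can run, with a small random tensor standing in for my real activation vector (the variable here is just a hypothetical example, not my actual layer output):

```python
import torch

# Stand-in for my real activation vector (hypothetical example tensor);
# nn layers use torch.float32 by default
activationvector = torch.randn(4)

arr = activationvector.detach().cpu().numpy()

print(activationvector.dtype)  # torch.float32
print(arr.dtype)               # float32
```

So the resulting array reports float32, matching the tensor's dtype, which is what made me wonder whether the extra digits I see with float128 are real.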

Second option, forcing the NumPy array to be float128. It seemed to make a difference whether I used float64 or float128, which probably means NumPy float64 didn't capture the full precision; maybe float128 doesn't do that either…

```
np.array(activationvector.detach().cpu(), dtype=np.float128)
```
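For comparison, here is how I looked at the extra digits on a single value, without any PyTorch involved. I used np.longdouble, which (as far as I understand) is what np.float128 aliases on my platform; whether it is a true 128-bit type seems to be platform-dependent:

```python
import numpy as np

x32 = np.float32(0.1)        # a float32 value, as stored in the tensor
as64 = np.float64(x32)       # widened to float64
as128 = np.longdouble(x32)   # widened to extended precision (np.float128 on many systems)

# Widening prints more digits, but they describe the same stored value
print(repr(x32))
print(repr(as64))
print(repr(as128))
```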

I hope someone can tell me how to make sure my NumPy CPU array captures at least the same precision as my PyTorch GPU tensor.

Thanks in advance!