How to avoid rounding errors in PyTorch

I want to implement the L2 norm in PyTorch and compare the result with np.linalg.norm. They give different results; how can I fix it?

```python
num = np.linalg.norm(
    (extracted_feat[str(key)].cpu().detach().numpy()
     - extracted_feats_aug[str(key)].cpu().detach().numpy())
    / np.mean(extracted_feat[str(key)].cpu().detach().numpy()))
```
num = 13982.017 (float32)

```python
torch.sqrt(torch.sum(((extracted_feat[str(key)] - extracted_feats_aug[str(key)]) / torch.mean(extracted_feat[str(key)]))**2))
```
value: tensor(13989.3135, device='cuda:0')

```python
((extracted_feat[str(key)] - extracted_feats_aug[str(key)]) / torch.mean(extracted_feat[str(key)])).norm(p=2)
```
value: tensor(13989.3135, device='cuda:0')

Hi,

How large are these Tensors?
Could you give a small code sample that we can run locally that reproduces the issue?

Thank you for your response.
extracted_feat[str(key)]: torch.Size([1, 64, 600, 1200])
extracted_feats_aug[str(key)]: torch.Size([1, 64, 600, 1200])

What I am trying to do: subtract the activations of the ReLU layers for the clean image and the augmented image, divide the difference by the mean value of the clean activations of the same layer, then compute the L2 norm of the result.
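
Here is a minimal sketch that reproduces the same kind of mismatch with random stand-in data (hypothetical `feat` / `feat_aug` tensors in place of my real activations, but with the same shape):

```python
import numpy as np
import torch

# Hypothetical stand-ins for extracted_feat[str(key)] and
# extracted_feats_aug[str(key)]; same shape, ~46M entries.
torch.manual_seed(0)
feat = torch.rand(1, 64, 600, 1200)             # non-negative, like ReLU outputs
feat_aug = feat + 0.01 * torch.randn_like(feat)

normalized = (feat - feat_aug) / torch.mean(feat)

# The same reduction, once in torch and once in numpy, both in float32.
torch_norm = torch.sqrt(torch.sum(normalized ** 2)).item()
np_norm = np.linalg.norm(normalized.numpy())

# The two results typically agree only in the leading digits, because each
# library sums the ~46M squared terms in a different order.
print(torch_norm, np_norm)
```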

Is the difference you are asking about the one between 13982.017 and 13989.3135?
For an accumulation over roughly 46 million entries (1 × 64 × 600 × 1200), I guess this is expected due to floating-point imprecision: each library (and each device) sums the terms in a different order, so the rounding errors accumulate differently.
If you want more precision, you can use .double() to get double-precision numbers, which raises the accuracy from roughly 7 significant decimal digits (float32) to roughly 15-16 (float64).
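
For example, here is a sketch (reusing hypothetical `feat` / `feat_aug` tensors in place of your real activations) showing that casting with .double() before the big reduction brings the torch and numpy results into close agreement:

```python
import numpy as np
import torch

# Hypothetical stand-ins for the real activations.
torch.manual_seed(0)
feat = torch.rand(1, 64, 600, 1200)
feat_aug = feat + 0.01 * torch.randn_like(feat)

# float32 version (what the original code does).
norm32 = torch.sqrt(torch.sum(((feat - feat_aug) / torch.mean(feat)) ** 2))

# float64 version: .double() casts the tensors before the reduction.
feat64, feat_aug64 = feat.double(), feat_aug.double()
norm64 = torch.sqrt(torch.sum(((feat64 - feat_aug64) / torch.mean(feat64)) ** 2))

# numpy reference, also in float64.
np_norm64 = np.linalg.norm(
    (feat64.numpy() - feat_aug64.numpy()) / feat64.numpy().mean())

# The float64 results should match to many more digits than the float32 one.
print(norm32.item(), norm64.item(), np_norm64)
```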