jozhang97
(Jeffrey O Zhang)
January 14, 2019, 4:36pm
1
It seems to me that np.linalg.norm and torch.norm have different outputs.
torch version: 1.0.0
import torch
import numpy as np
var_torch = torch.randn((3,333,333))
var_numpy = var_torch.detach().numpy()
print(torch.norm(var_torch))
>>> tensor(1884.0406)
print(np.linalg.norm(var_numpy))
>>> 577.75214
Is this expected?
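(A quick back-of-the-envelope check, independent of either library: for a standard-normal tensor the squared entries average to 1, so the full norm should land near sqrt(n). That makes the ~577 result plausible and the ~1884 result suspicious.)

```python
import numpy as np

# For x ~ N(0, 1) with n elements, E[x_i^2] = 1,
# so the element-wise 2-norm should be close to sqrt(n).
n = 3 * 333 * 333
expected = np.sqrt(n)  # ~576.8, close to NumPy's 577.75 above
```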
vmirly1
(Vahid Mirjalili)
January 14, 2019, 4:50pm
2
I think they handle the dimensions differently. If I specify the order of dimensions, then I can get the same results using NumPy and PyTorch, so first taking the norm of dimensions 1, 2 and then the norm of the resulting tensor/array:
>>> np.linalg.norm(np.linalg.norm(var_numpy, axis=(1,2)))
577.3682
>>> torch.norm(torch.norm(var_torch, dim=(1,2)))
tensor(577.3682)
This result is consistent with what NumPy gives without specifying dimensions. However, I don’t know how PyTorch arrives at its result:
>>> torch.norm(var_torch)
tensor(1883.6027)
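For what it’s worth, the nested call above works because the Frobenius norm is just the 2-norm of the flattened array, so taking the norm of the per-slice norms reproduces the full norm. A small NumPy sketch (shapes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5, 5))

# Frobenius norm of the whole array: sqrt of the total sum of squares
full = np.sqrt((x ** 2).sum())

# Norm of the per-slice norms: sqrt(sum_i (norm of slice i)^2),
# which is the same sum of squares, so the results agree
nested = np.linalg.norm(np.linalg.norm(x, axis=(1, 2)))

assert np.isclose(full, nested)
```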
@smth @ptrblck Any comment?
This issue might be related to this bug.
Soumith opened an issue here to track it.
Just to double-check whether these issues are related: do you get the correct result on the GPU?
vmirly1
(Vahid Mirjalili)
January 14, 2019, 4:58pm
4
Thanks @ptrblck. I get correct results on GPU:
>>> torch.norm(var_torch.to(device))
tensor(577.3682, device='cuda:0')
1 Like
Thanks for testing it @vmirly1 (as I currently don’t have access to a GPU).
In that case it looks like they are related.
As Soumith said, it’s a high priority bug and should be fixed before the next minor release.
1 Like
jozhang97
(Jeffrey O Zhang)
January 14, 2019, 5:29pm
6
Ahh, I see the connection with the previous thread.
Thanks so much.
Junhao_Wen
(Junhao Wen)
December 21, 2019, 10:16pm
7
@ptrblck
Sorry to reopen this issue. I found that np.linalg.norm() and torch.norm give similar results for the Frobenius norm (similar, in that they differ only after a few decimal places), but for the 2-norm the results differ substantially.
Here is the code to reproduce:
import torch
from scipy.linalg import norm
import numpy as np
a = np.arange(9) - 4.0
a = a.reshape((3, 3))
>>> np.linalg.norm(a)
7.745966692414834
>>> torch.norm(torch.from_numpy(a).cuda())
tensor(7.7460, device='cuda:0', dtype=torch.float64)
>>> np.linalg.norm(a, ord=2)
7.3484692283495345
>>> torch.norm(torch.from_numpy(a).cuda(), p=2)
tensor(7.7460, device='cuda:0', dtype=torch.float64)
Do you think this is normal? I’m using PyTorch 1.2.0 on Ubuntu 18.04. I need to obtain exactly the same results for the 2-norm.
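(For context on the mismatch: NumPy’s np.linalg.norm(a, ord=2) on a 2-D array computes the spectral norm, i.e. the largest singular value, whereas torch.norm(..., p=2) treats the input as a flat vector and returns the element-wise 2-norm, which equals the Frobenius norm. A NumPy-only sketch of the distinction:)

```python
import numpy as np

a = (np.arange(9) - 4.0).reshape(3, 3)

fro = np.linalg.norm(a)          # Frobenius norm: sqrt(sum of squared entries)
spec = np.linalg.norm(a, ord=2)  # spectral norm for a matrix argument

# The spectral norm equals the largest singular value of a
largest_sv = np.linalg.svd(a, compute_uv=False)[0]

# The Frobenius norm equals the plain 2-norm of the flattened array,
# which is what torch.norm(t, p=2) computes for any tensor
flat2 = np.linalg.norm(a.ravel())
```

So matching NumPy’s ord=2 in PyTorch requires the largest singular value (e.g. via an SVD), not torch.norm with p=2; newer PyTorch releases expose torch.linalg.matrix_norm for this, but I have not checked that against 1.2.0.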
Hao