# Output of torch.norm(x) depends on size of x, not in a good way?

I am confused: can you help me understand why torch.norm(x) depends on the size of x? Thanks!

```python
import torch
import numpy

N = 10000
x = torch.ones(N)
n1 = torch.norm(x)
n2 = numpy.sqrt(N)
```
```
[output]: n1=tensor(100.), n2=100.
```
```python
N = 40000
x = torch.ones(N)
n1 = torch.norm(x)
n2 = numpy.sqrt(N)
```
```
[output]: n1=tensor(266.0605), n2=200.
```

By default, torch.norm computes the vector 2-norm, which by definition is the square root of the sum of squared entries:

http://mathworld.wolfram.com/L2-Norm.html
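
As a quick sanity check on the expected value (this is just the standard definition, nothing PyTorch-specific):

```latex
\|x\|_2 = \sqrt{\sum_{i=1}^{N} x_i^2},
\qquad
x = \mathbf{1}_N \;\Rightarrow\; \|x\|_2 = \sqrt{N}
```

so torch.ones(40000) should give exactly sqrt(40000) = 200.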

However, once you hit floating-point precision limits, you get "inaccurate" results…

I'm taking a look at what these accuracy limits are with respect to floating point, and whether that might be affecting things.
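
For reference, here is a minimal NumPy sketch of where float32 accumulation actually saturates (a generic illustration of the precision limit, not of PyTorch's reduction code): small integers stay exact up to 2**24, so summing 40000 ones loses nothing.

```python
import numpy as np

# float32 has a 24-bit significand, so n + 1.0 == n once n reaches 2**24
# (16777216). Accumulating 40000 ones one at a time is therefore exact in
# float32, which suggests precision limits alone cannot explain 266.0605.
acc = np.float32(0.0)
for _ in range(40000):
    acc += np.float32(1.0)

print(acc)           # 40000.0 -- exact
print(np.sqrt(acc))  # 200.0
```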

I know that, but for x = torch.ones(40000) the answer should be 200, not the 266.0605 I get from torch.norm(x).
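
One way to narrow it down is to spell the same reduction out by hand; a hedged sanity check, assuming the elementwise multiply and sum kernels are unaffected:

```python
import torch

x = torch.ones(40000)

# Explicit sum-of-squares reduction. If this prints tensor(200.) while
# torch.norm(x) prints tensor(266.0605), the bug is inside torch.norm.
print(torch.sqrt((x * x).sum()))
```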

I'm sorry, I posted my answer by mistake while I was still typing it.

I looked into this further; it looks like a bug that we introduced in 1.0.0. I'm taking a further look and filing an issue.

Only the CPU path is affected; the GPU produces the correct result.

Got it. I think the GPU is affected as well, though.

I checked the GPU implementation via:

```python
>>> import torch
>>> x = torch.ones(40000)
>>> torch.norm(x)
tensor(266.0605)
>>> torch.norm(x.to(dtype=torch.float32, device='cuda'))
tensor(200., device='cuda:0')
```

It seems to work fine.
Are you seeing an incorrect result on GPU as well?

Double-checked it: no, you are right, the GPU result is correct. Thanks!

I filed an issue at https://github.com/pytorch/pytorch/issues/15602
It will definitely be fixed in our next minor release on Jan 15th, and will be fixed in our nightlies much sooner than that; I'm having it looked at with high priority.

Really sorry for the bug!
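
In the meantime, two possible workarounds based on the observations in this thread (hedged suggestions; the float64 path is assumed, not confirmed, to be unaffected):

```python
import torch

x = torch.ones(40000)

# Workaround 1: compute the norm in float64 on CPU (assumes the float64
# kernel does not share the bug), then cast back to float32.
print(torch.norm(x.double()).float())

# Workaround 2: use the GPU path, which was confirmed correct above.
if torch.cuda.is_available():
    print(torch.norm(x.cuda()))
```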


Just FYI: this is now fixed via https://github.com/pytorch/pytorch/pull/15885 and will go into the 1.0.1 release within a week from now.
