# Ǹorm of a tensor

Hi all,

I have a `64x10x3x32x32` tensor `g` where the first dimension is the batch size. For every `10x3x32x32` subtensor I would like to compute the 2-norm and then sum them up. One way to do it is like this:

```
sum_2_normes = torch.zeros(1)
...
for i in range(batch_size):
    sum_2_normes += g[i].norm(p=2)
```

I was wondering if there is a way to do it in one line and/or more efficiently. I tried `torch.norm(g, p=2, dim=0)` but, as expected from the documentation, I get a `10x3x32x32` tensor instead of what I need, which is a `64x1` tensor.

You could flatten each sample of the tensor and calculate the norm on its view.

```
a = torch.randn(64, 10, 3, 32, 32)
a = a.view(64, -1)             # flatten each sample to a 1D vector
b = torch.norm(a, p=2, dim=1)  # one 2-norm per sample -> shape (64,)
torch.sum(b)
```

Thanks, that works. Now I realize that I was thinking about `dim` as a numpy `axis`, which is wrong.


Well, you can think of `dim` as a numpy `axis`. Could you elaborate on what you expected?
Numpy will behave in the same way for this problem:

```
x = np.random.randn(64, 10, 3, 32, 32)
y = np.linalg.norm(x, ord=2, axis=1)
print(y.shape)
```
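
The equivalence is easy to check directly. A quick sketch comparing the two libraries on the same data (the variable names here are just for illustration): the norm along `dim`/`axis` 1 produces the same shape and, up to float rounding, the same values.

```python
import numpy as np
import torch

x = np.random.randn(64, 10, 3, 32, 32).astype(np.float32)
t = torch.from_numpy(x)

# reduce over axis/dim 1 in both libraries
n_np = np.linalg.norm(x, ord=2, axis=1)
n_pt = torch.norm(t, p=2, dim=1)

print(n_np.shape, tuple(n_pt.shape))  # both (64, 3, 32, 32)
assert np.allclose(n_np, n_pt.numpy(), atol=1e-5)
```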

@ptrblck I feel confused about matrix norms and vector norms. I know vector norms and matrix norms have different formulations. So how do I tell `torch.norm()` which kind of norm to use?

Have a look at the torch.norm docs to see all possible flags for `p` and which norm will be calculated for matrices and vectors.
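
A small illustration of the difference, assuming a recent PyTorch where `torch.linalg` is available: calling `torch.norm` with `p=2` and no `dim` treats the input as one flattened vector (giving the Frobenius norm for a matrix), whereas the matrix 2-norm (the largest singular value) comes from `torch.linalg.matrix_norm`.

```python
import torch

m = torch.tensor([[1., 2.], [3., 4.]])

# flattened vector 2-norm: sqrt(1 + 4 + 9 + 16) = sqrt(30) ~ 5.477
vec = torch.norm(m, p=2)

# matrix 2-norm: largest singular value of m (~ 5.465 here)
mat = torch.linalg.matrix_norm(m, ord=2)

print(vec.item(), mat.item())
```

For this matrix the two values are close but not equal, which is exactly why it matters which norm a call computes.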

Why is there an accent on the word norm here? You know it makes it harder to search, right?

Hi, is the solution you proposed more efficient than the "for loop" solution, due to, e.g., parallel computation (in theory, computing the L2 norm for one sample in the batch is independent of the others)?

Thanks a lot!

Python for loops are often slower than a single call into a specific function, as it could use vectorized code under the hood.
You could profile both approaches for your current workloads and choose the faster one.

For the norm calculation I would assume that avoiding the loop will yield better performance.
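
A minimal sketch of such a comparison (the helper names are just for illustration, and the exact timings will depend on your hardware): the assertion checks that the loop and the vectorized version agree up to float rounding, and the timings give a rough idea of the difference.

```python
import time
import torch

g = torch.randn(64, 10, 3, 32, 32)

def loop_sum(g):
    # original approach: one norm per sample, accumulated in Python
    total = torch.zeros(1)
    for i in range(g.size(0)):
        total += g[i].norm(p=2)
    return total

def vectorized_sum(g):
    # flatten each sample, take one norm per row, then sum
    return g.view(g.size(0), -1).norm(p=2, dim=1).sum()

# both approaches agree up to float rounding
assert torch.allclose(loop_sum(g), vectorized_sum(g))

t0 = time.perf_counter()
for _ in range(20):
    loop_sum(g)
t1 = time.perf_counter()
for _ in range(20):
    vectorized_sum(g)
t2 = time.perf_counter()
print(f"loop: {t1 - t0:.4f}s, vectorized: {t2 - t1:.4f}s")
```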