# L2 normalisation via `F.normalize` and the `dim` argument

I am quite new to PyTorch and I am looking to apply L2 normalisation to two types of tensors, but I am not totally sure what I am doing is correct:

- Type 1 (in the forward function) has shape `torch.Size([2, 128])` and I would like to normalise each vector (L2 norm).

for this case, I do:
`F.normalize(tensor_variable, p=2, dim=1)`
Is this the correct way to do it? Is there any check I can perform to confirm that the vectors have been L2-normed?

- Type 2 has shape `()`: for this I just do:
`F.normalize(tensor_variable, p=2, dim=0)`
Is this the correct way to go about it?

I am not able to find the doc link to this `F.normalize` function and I am having to take a guess at the dimension.

Hi John!

Yes, your use of `normalize()` is correct.

Just compute the (square of the) norm:

```
(torch.nn.functional.normalize(tensor_variable, p=2, dim=1)**2).sum(dim=1)
```

and check that you get 1 for each row of `tensor_variable`.
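As a runnable sketch of that check (the tensor here is a made-up stand-in with the same shape as in the question):

```python
import torch
import torch.nn.functional as F

# Hypothetical input with the shape from the question: torch.Size([2, 128])
tensor_variable = torch.randn(2, 128)

normed = F.normalize(tensor_variable, p=2, dim=1)

# Each row's squared L2 norm should now be 1 (up to floating-point error)
sq_norms = (normed ** 2).sum(dim=1)
print(torch.allclose(sq_norms, torch.ones(2)))  # True
```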

You can find the documentation here: `torch.nn.functional.normalize`.

Best.

K. Frank


Hi Frank,

Firstly, thank you very much for your reply. Your answers certainly helped me. I have one follow-up question: `torch.nn.functional.normalize` and `F.Normalize` are presumably the same function?

thank you.

Hi John!

First I think you have a minor typo – I think you mean `F.normalize`
with a lower-case `n`.

There is no `F.normalize`, strictly speaking, in pytorch. Rather, it is
common practice to:

```
import torch.nn.functional as F
```

so that you can save a little typing and write `F.normalize` instead
of `torch.nn.functional.normalize`. So, yes, they are the same
function.

(There is a `torchvision.transforms.Normalize` – with an upper-case
`N`, and a class, rather than a function – but that’s not what you’re talking about.)

Best.

K. Frank

Thank you again, Frank, for the patient explanation. Really appreciate your time.