How to normalize a tensor inplace?

Given an embedding layer initialised as below:
embedding_layer = nn.EmbeddingBag(subvocab_size, embedding_dim, mode="mean", sparse=True, max_norm=1)
Here subvocab_size is 10M and embedding_dim is 100.

If I normalize it like below:
embedding_layer.weight.data = F.normalize(embedding_layer.weight.data, p=2, dim=1)
the code throws a GPU out-of-memory error.

I’d like to normalize the embeddings every n batches.

Assuming the normalization needs additional memory, you might be able to normalize the embedding before pushing it to the device.
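A minimal sketch of that idea, assuming the layer can be built on the CPU first; the sizes are taken from the question, and the use of copy_ under no_grad is just one way to write the update:

import torch
import torch.nn as nn
import torch.nn.functional as F

subvocab_size, embedding_dim = 10_000_000, 100

# Build the layer on the CPU first
embedding_layer = nn.EmbeddingBag(subvocab_size, embedding_dim,
                                  mode="mean", sparse=True, max_norm=1)

# Normalize while the weight still lives in host memory, so the temporary
# tensor created by F.normalize is allocated in CPU RAM, not on the GPU
with torch.no_grad():
    embedding_layer.weight.copy_(F.normalize(embedding_layer.weight, p=2, dim=1))

# Only now move the layer to the GPU
device = torch.device("cuda")
embedding_layer = embedding_layer.to(device)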

I want to normalize it every n batches.
PS: Just updated the question.

Make sure not to use .data, as it's deprecated. Use .copy_() to update your weights instead.
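A hedged sketch of how that could look for the "every n batches" use case; the chunking and the renormalize_embedding_ helper are my own additions to keep the temporary buffer small, and n / batch_idx are assumed training-loop variables:

import torch
import torch.nn.functional as F

@torch.no_grad()
def renormalize_embedding_(weight: torch.Tensor, chunk_size: int = 1_000_000):
    """L2-normalize the rows of `weight` in place, one chunk at a time,
    so only a chunk_size x embedding_dim temporary is ever allocated."""
    for start in range(0, weight.size(0), chunk_size):
        chunk = weight[start:start + chunk_size]   # view into the weight
        chunk.copy_(F.normalize(chunk, p=2, dim=1))

# Inside the training loop, e.g. every n batches:
# if batch_idx % n == 0:
#     renormalize_embedding_(embedding_layer.weight)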