After encoding an embedding using a Fully Convolutional Encoder, I want to carry out channel-wise normalisation of the embedding using the L2 norm of that channel, doing a pixelwise division for that channel, before I feed it to the decoder. How can I do it?
My embedding is of shape (N, C, H, W).
import torch.nn.functional as F

# p=2, dim=1: divide each element by the L2 norm taken along the channel dimension
input = F.normalize(input, p=2, dim=1)
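To see what this does concretely, here is a small self-contained sketch (the tensor `emb` and its shape are illustrative) comparing `F.normalize` against manually dividing by the L2 norm taken along `dim=1`:

```python
import torch
import torch.nn.functional as F

emb = torch.randn(2, 3, 4, 5)  # illustrative (N, C, H, W) embedding

normalized = F.normalize(emb, p=2, dim=1)

# manual equivalent: divide by the L2 norm computed along dim=1
manual = emb / emb.norm(p=2, dim=1, keepdim=True)
print(torch.allclose(normalized, manual))  # True
```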
I am not clear on what is meant by pixelwise division.
By pixelwise division I meant that each item of the matrix is divided by a scalar value (the L2 norm of the channel).
I see. Please try the code above and verify that it's what you wanted.
@InnovArul that is exactly the effect I wanted.
@InnovArul is it possible to rescale the values in each of the channels to lie in [0, 1]?
Of course. You can do min-max normalization. I am not sure if there is any direct API available.
import torch

N, C, H, W = 2, 3, 4, 5
feat_maps = torch.randn(N, C, H, W)
vectorized_feat_maps = feat_maps.view(N, C, -1)
# min()/max() along a dim return (values, indices) namedtuples, so take .values
mins = vectorized_feat_maps.min(dim=-1, keepdim=True).values
maxs = vectorized_feat_maps.max(dim=-1, keepdim=True).values
feat_maps_0to1 = ((vectorized_feat_maps - mins) / (maxs - mins)).view_as(feat_maps)
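A quick self-contained sanity check (shapes are illustrative) that this kind of per-channel min-max rescale really pins every channel's min to 0 and max to 1:

```python
import torch

feat_maps = torch.randn(2, 3, 4, 5)
v = feat_maps.view(2, 3, -1)
# min()/max() along a dim return (values, indices), hence .values
mins = v.min(dim=-1, keepdim=True).values
maxs = v.max(dim=-1, keepdim=True).values
scaled = ((v - mins) / (maxs - mins)).view_as(feat_maps)

# per-channel extrema over the spatial dims H, W
print(scaled.amin(dim=(2, 3)))  # all zeros
print(scaled.amax(dim=(2, 3)))  # all ones
```

The element attaining the channel minimum maps to exactly 0 and the maximum to exactly 1, so the check is exact rather than approximate.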
This might be helpful as well.
I don’t think there currently is a built-in PyTorch function for what you want.
If you really want a specific function for it, instead of simply doing this yourself, “torchsample” (https://github.com/ncullen93/torchsample) offers a “RangeNormalize” function where you can just specify min and max range values, either as per-channel tuples (e.g. ((0, 0, 0), (1, 1, 1))) or as floats (0, 1).
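If you'd rather avoid the extra dependency, here is a minimal manual sketch of the same idea, rescaling each channel to an arbitrary [low, high] range. The function name `range_normalize` is just illustrative, not torchsample's actual API:

```python
import torch

def range_normalize(x, low=0.0, high=1.0):
    """Rescale each (n, c) channel of an NCHW tensor to [low, high]."""
    v = x.view(x.size(0), x.size(1), -1)
    # min()/max() along a dim return (values, indices), hence .values
    mins = v.min(dim=-1, keepdim=True).values
    maxs = v.max(dim=-1, keepdim=True).values
    unit = (v - mins) / (maxs - mins)          # per-channel [0, 1]
    return (unit * (high - low) + low).view_as(x)

x = torch.randn(2, 3, 4, 5)
y = range_normalize(x, low=-1.0, high=1.0)    # each channel now spans [-1, 1]
```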