a-parida12
(Abhijeet Parida)
February 13, 2019, 8:19pm
After encoding an embedding using a fully convolutional encoder, I want to carry out channel-wise normalisation of the embedding using the L2 norm of that channel, and do a pixel-wise division for that channel, before I feed it to the decoder. How can I do it?
My embedding is of shape [N, C, H, W].
import torch.nn.functional as F

# divide by the L2 norm taken along the channel dimension (dim=1)
input = F.normalize(input, p=2, dim=1)
I am not clear what is meant by pixelwise division.
a-parida12
(Abhijeet Parida)
February 13, 2019, 8:33pm
By pixel-wise division I meant that each item of the matrix is divided by a scalar value (the L2 norm of the channel).
I see. Please try the code above and verify if that's what you wanted.
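For reference, a minimal sketch of what that division looks like when done by hand (shapes chosen arbitrarily). Note that with dim=1 the L2 norm is taken over the C values at each spatial location, and F.normalize performs the same division:

```python
import torch
import torch.nn.functional as F

N, C, H, W = 2, 3, 4, 5
embedding = torch.randn(N, C, H, W)

# L2 norm along the channel dimension; keepdim=True so it broadcasts back
norm = embedding.norm(p=2, dim=1, keepdim=True)

# each element divided by that scalar norm (clamped to avoid division by zero)
manual = embedding / norm.clamp_min(1e-12)

# F.normalize(p=2, dim=1) does exactly this division
same = torch.allclose(manual, F.normalize(embedding, p=2, dim=1))
```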
a-parida12
(Abhijeet Parida)
February 13, 2019, 8:48pm
Thank you @InnovArul, that is exactly the effect I wanted.
a-parida12
(Abhijeet Parida)
February 21, 2019, 6:41pm
@InnovArul is it possible to rescale the values in each of the channels to lie in [0, 1]?
Of course. You can do min-max normalization. I am not sure if there is a direct API available for it.
import torch

N, C, H, W = 2, 3, 4, 5
feat_maps = torch.randn(N, C, H, W)

# flatten the spatial dimensions so min/max can be taken per channel
vectorized_feat_maps = feat_maps.view(N, C, -1)
mins = vectorized_feat_maps.min(dim=-1, keepdim=True)[0]
maxs = vectorized_feat_maps.max(dim=-1, keepdim=True)[0]

# min-max normalization: each channel rescaled to [0, 1]
feat_maps_0to1 = ((vectorized_feat_maps - mins) / (maxs - mins)).view_as(feat_maps)
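A quick sanity check on the min-max approach above (shapes are just examples): after rescaling, each channel's minimum should be 0 and its maximum 1, provided no channel is constant (a constant channel makes max - min zero and the division blow up):

```python
import torch

N, C, H, W = 2, 3, 4, 5
feat_maps = torch.randn(N, C, H, W)

flat = feat_maps.view(N, C, -1)
mins = flat.min(dim=-1, keepdim=True)[0]
maxs = flat.max(dim=-1, keepdim=True)[0]
feat_maps_0to1 = ((flat - mins) / (maxs - mins)).view_as(feat_maps)

# per-channel extrema after rescaling
ch_min = feat_maps_0to1.view(N, C, -1).min(dim=-1)[0]
ch_max = feat_maps_0to1.view(N, C, -1).max(dim=-1)[0]
```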
This might be helpful as well.
I don't think there currently is a built-in PyTorch function for what you want.
If you really want a specific function for it instead of simply doing this yourself, torchsample (https://github.com/ncullen93/torchsample) offers a RangeNormalize transform where you can specify min and max range values, either as per-channel tuples, e.g. ((0,0,0), (1,1,1)), or as plain floats (0, 1).
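If you'd rather avoid the extra dependency, a hand-rolled stand-in with the same idea could look like this (the class name and (min, max) arguments are modeled on torchsample's RangeNormalize; this is not its actual code):

```python
import torch

class RangeNormalize:
    """Rescale each channel of a [C, H, W] tensor to [min_val, max_val].

    Hypothetical stand-in modeled on torchsample's transform of the same name.
    """
    def __init__(self, min_val=0.0, max_val=1.0):
        self.min_val = min_val
        self.max_val = max_val

    def __call__(self, x):
        # flatten spatial dims so min/max are taken per channel
        flat = x.view(x.size(0), -1)
        mins = flat.min(dim=-1, keepdim=True)[0]
        maxs = flat.max(dim=-1, keepdim=True)[0]
        # map to [0, 1], then to [min_val, max_val]
        scaled = (flat - mins) / (maxs - mins)
        out = scaled * (self.max_val - self.min_val) + self.min_val
        return out.view_as(x)

img = torch.randn(3, 8, 8)
out = RangeNormalize(0.0, 1.0)(img)
```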