How to normalize all feature maps to a range of [0, 1]

I have as an output of a convolutional network a tensor of shape [1, 20, 64, 64]. I want to normalize all feature maps to a range of [0, 1]. I found out that I can get all the means with means = torch.mean(features, (2, 3)), but I don’t know how to proceed from there. Any help?

You could subtract the min value from the tensor and then divide by the max value (of the shifted tensor).
This would make sure that all values are in the range [0, 1].
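
A minimal sketch of that suggestion, assuming the network output is a tensor called features of shape [1, 20, 64, 64] (replaced by random data here for illustration):

import torch

features = torch.randn(1, 20, 64, 64)  # stand-in for the conv network output
features = features - features.min()   # shift so the smallest value becomes 0
features = features / features.max()   # max of the shifted tensor equals the original max - min

After these two steps every value lies in [0, 1], because the division uses the maximum of the already shifted tensor.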


Thank you @ptrblck, is there an efficient way to do that? I know how to do it with a for loop iterating over all 20 feature maps, but that seems very ugly.

I do not understand why you need torch.mean to normalize all feature maps to a range of [0, 1].

If you want standardization (zero mean and unit variance) per feature map:

X = torch.randn(1, 20, 64, 64)
X -= X.mean((2, 3), keepdim=True)  # zero mean per feature map
X /= X.std((2, 3), keepdim=True)   # unit standard deviation per feature map

If you want them in [0, 1]:

X = torch.randn(1, 20, 64, 64)
X -= X.min()  # subtract the global minimum
X /= X.max()  # max of the already shifted tensor, i.e. the original max - min

Further, if you want to do this for each feature map independently:

X = torch.randn(1, 20, 64, 64)
min_val = X.min(-1)[0].min(-1)[0]  # per-feature-map minimum, shape [1, 20]
max_val = X.max(-1)[0].max(-1)[0]  # per-feature-map maximum, shape [1, 20]
X = (X - min_val[:, :, None, None]) / (max_val[:, :, None, None] - min_val[:, :, None, None])
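
If your PyTorch version provides torch.amin / torch.amax, which reduce over several dimensions at once, an equivalent and slightly more readable sketch would be:

X = torch.randn(1, 20, 64, 64)
min_val = X.amin(dim=(2, 3), keepdim=True)  # per-feature-map minimum, shape [1, 20, 1, 1]
max_val = X.amax(dim=(2, 3), keepdim=True)  # per-feature-map maximum, shape [1, 20, 1, 1]
X = (X - min_val) / (max_val - min_val)     # each of the 20 feature maps now lies in [0, 1]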

Thank you @KaiHoo, one question:

Why is the - min_val[:, :, None, None] in the denominator necessary? I just tested it and it works perfectly, but I don’t understand why. If we don’t want to normalize all feature maps independently, we can just use X /= X.max().

Edit: Ahh, I understand, it’s because in your earlier example the max value is taken from the tensor after the min has already been subtracted, so it already equals the original max - min.
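
A quick sanity check one could run to confirm that the two formulations give the same result:

import torch

X = torch.randn(1, 20, 64, 64)
shifted = X - X.min()
# dividing by shifted.max() is the same as dividing by (X.max() - X.min())
print(torch.allclose(shifted / shifted.max(), (X - X.min()) / (X.max() - X.min())))  # prints True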

What if I want to normalize the features over a whole dataset? It looks like the operations above only normalize features within the same batch.
Thank you.
