Normalize vectors to [-1,1] or [0,1]

Sorry if this question is easy, but I cannot find an API in PyTorch that normalizes a vector into a range, such as [0,1] or [-1,1], which is useful for training.

for example:
a_i / sqrt(sum(a_i^2))

It seems that there is no such API, so I use:

vec.div_(torch.norm(vec, 2))
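For reference, here is a runnable version of that (a minimal sketch, assuming a 1-D float tensor):

import torch

vec = torch.tensor([3.0, 4.0])
vec.div_(torch.norm(vec, 2))  # in-place division by the L2 norm
vec  # tensor([0.6000, 0.8000])

(In more recent PyTorch versions, torch.nn.functional.normalize(vec, p=2, dim=0) does the same thing.)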

I don’t think there is currently a built-in PyTorch function for what you want.
If you really want a dedicated function for it, instead of simply doing this yourself, torchsample (https://github.com/ncullen93/torchsample) offers a RangeNormalize transform where you can specify min and max range values, either as per-channel tuples, i.e. ((0,0,0), (1,1,1)), or just as floats, (0, 1).
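For illustration, usage might look something like this (a rough sketch based on the description above; I haven’t checked the exact import path or constructor signature, so treat those names as assumptions):

import torch
from torchsample.transforms import RangeNormalize  # assumed import path

rn = RangeNormalize(0, 1)  # target range [0, 1], given as plain floats
x = torch.rand(3, 32, 32) * 255  # e.g. an un-normalized image tensor
x_norm = rn(x)  # values rescaled into [0, 1]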


Wow, that looks very convenient for a PyTorch newcomer!

For normalization between [0, 1]:

x = x / x.sum(0).expand_as(x)
x[torch.isnan(x)] = 0  # if an entire column is zero, division by 0 produces NaNs

For normalization between [-1, 1]:

x = x / x.sum(0).expand_as(x)
x[torch.isnan(x)] = 0  # if an entire column is zero, division by 0 produces NaNs
x = 2 * x - 1

x = x/x.sum(0)

I saw you post this in a few places, but it doesn’t look right. Why are you dividing by a sum? And you’re not taking negative values into account. In other words, the above code only works under certain conditions: while it does keep the vector’s max value below 1, that max could end up being less than 0.0001.
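To make that concrete, a quick sketch:

import torch

# negative entries break the [0, 1] guarantee:
x = torch.tensor([-2.0, 1.0, 3.0])
x / x.sum(0)  # tensor([-1.0000,  0.5000,  1.5000]) -- values outside [0, 1]

# and even for non-negative input, the maximum can end up tiny:
y = torch.ones(10000)
(y / y.sum(0)).max()  # tensor(1.0000e-04)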

The following code works correctly for any given 1-D vector:

import torch

# rescale vectors to a desired range
x = torch.tensor([2, -1.0, 5, 6, 7])
if x.max() != x.min():  # skip constant (e.g. all-zero) vectors, which would divide by 0
    # linear rescale to range [0, 1]
    x -= x.min()  # bring the lower end to 0
    x /= x.max()  # bring the upper end to 1
    x  # tensor([0.3750, 0.0000, 0.7500, 0.8750, 1.0000])
    # linear rescale to range [-1, 1]
    x = 2 * x - 1
    x  # tensor([-0.2500, -1.0000,  0.5000,  0.7500,  1.0000])
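If you need this often, you can wrap it up as a small helper (a sketch; rescale is just a name I made up, not a PyTorch API):

import torch

def rescale(x, lo=0.0, hi=1.0):
    # linearly map x so that x.min() -> lo and x.max() -> hi
    span = x.max() - x.min()
    if span == 0:  # constant vector: rescaling is undefined, return a copy unchanged
        return x.clone()
    return (x - x.min()) / span * (hi - lo) + lo

rescale(torch.tensor([2, -1.0, 5, 6, 7]), -1.0, 1.0)
# tensor([-0.2500, -1.0000,  0.5000,  0.7500,  1.0000])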

You are right, mine doesn’t look right. I was using this for a specific use-case. Your snippet is the one to follow. Thank you for pointing this out! Cheers!