Which layer normalizes the input to zero mean and unit variance?

Dear all,

As far as I know, we have some layers for normalization. For example, tanh() normalizes the input to [-1, 1], and sigmoid normalizes the input to [0, 1]. …

I am looking for a layer (with both forward and backward passes, like tanh) that can normalize the input to zero mean and unit variance. Could you suggest any function in PyTorch? Otherwise, how could I write a custom layer to perform it? Thanks

Hello,

How about torch.norm()? (documentation)

Have you tried it before? Does it have both forward and backward functions?

Sure, torch.norm() has a backward function, like this:

import torch

x = torch.randn(2, 2, requires_grad=True)
y = torch.norm(x)
y.grad_fn
> <NormBackward0 at 0x7f718718d908>
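A quick way to confirm that the gradient actually flows back to the input (a minimal sketch, not from the original post):

import torch

x = torch.randn(2, 2, requires_grad=True)
y = torch.norm(x)   # Frobenius norm of the whole tensor, a single scalar
y.backward()        # backward pass runs, gradient w.r.t. x is populated
print(x.grad)       # d||x||/dx = x / ||x||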

That is a totally wrong solution:

import torch

x = torch.randn(2, 2, requires_grad=True)
print(x.mean(), x.std())
x = x.norm()
print(x.mean(), x.std())

Output

tensor(0.5027, grad_fn=<MeanBackward0>) tensor(0.8901, grad_fn=<StdBackward0>)
tensor(1.8406, grad_fn=<MeanBackward0>) tensor(nan, grad_fn=<StdBackward0>)
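To see why, note that torch.norm() collapses the whole tensor into a single scalar, so there is nothing left to standardize and the std of a single value is nan. A small check (assuming the same x as above):

import torch

x = torch.randn(2, 2, requires_grad=True)
print(x.norm().shape)  # torch.Size([]) -- a 0-dim scalar, not a normalized tensor
print(x.norm().std())  # std of a single value is nan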
import torch

def normalization(x):
    # standardize: subtract the mean, then divide by the std
    return (x - x.mean()) / x.std()

a = torch.randn(2, 2, requires_grad=True)
b = normalization(a)
print(b.mean(), b.std())

If you want a forward pass, just create an nn.Module and put this in the forward method. The __init__ can be empty, as in the sketch below.
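For reference, a minimal sketch of that wrapper (the class name Standardize is just an example, not from the original post); autograd supplies the backward pass automatically:

import torch
import torch.nn as nn

class Standardize(nn.Module):
    # rescales the input to zero mean and unit variance
    def forward(self, x):
        return (x - x.mean()) / x.std()

layer = Standardize()
a = torch.randn(2, 2, requires_grad=True)
b = layer(a)
print(b.mean(), b.std())  # roughly 0 and 1, with grad_fn attached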

@jmaronas: It looks good. Do you think it will behave like an nn.Tanh() layer? The purpose of the normalization layer is similar to the tanh layer, just normalizing the input to zero mean and unit variance instead of [-1, 1].

What do you mean by behaving like the tanh layer? There are two differences in this case. Tanh guarantees a bounded output, while zero-mean, unit-std normalization does not. Tanh is a non-linear transformation, while zero-mean, unit-std normalization is a linear rescaling.
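To illustrate the first difference, a small sketch (assumed example, not from the original thread): tanh stays within [-1, 1] no matter how large the input is, while the standardized values are not confined to any fixed interval.

import torch

x = 100 * torch.randn(4)          # large-magnitude inputs
print(torch.tanh(x))              # every entry lies in [-1, 1]
print((x - x.mean()) / x.std())   # zero mean, unit std, but not confined to [-1, 1]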