As far as I know, there are some layers that normalize their input. For example, tanh() squashes the input to [-1, 1] and sigmoid squashes it to [0, 1]. …
I am looking for a layer (with both forward and backward passes, like tanh) that normalizes its input to zero mean and unit variance. Could you suggest an existing function in PyTorch? Otherwise, how could I write a custom layer to do this? Thanks
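PyTorch already ships normalization layers such as nn.BatchNorm1d and nn.LayerNorm that do this kind of standardization (with learnable affine parameters). If all you need is plain per-sample standardization, a custom module is also straightforward, because autograd derives the backward pass automatically, just as it does for nn.Tanh. A minimal sketch (the `Standardize` name and the `eps` default are my own choices, not an official API):

```python
import torch
import torch.nn as nn

class Standardize(nn.Module):
    """Shift and scale each sample to zero mean and unit variance.

    No custom backward is needed: autograd differentiates the
    forward pass, just as it does for built-in layers like nn.Tanh.
    """
    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps  # guards against division by zero for constant inputs

    def forward(self, x):
        # Normalize over every dimension except the batch dimension.
        dims = tuple(range(1, x.dim()))
        mean = x.mean(dim=dims, keepdim=True)
        std = x.std(dim=dims, keepdim=True)
        return (x - mean) / (std + self.eps)

layer = Standardize()
x = torch.randn(4, 10) * 3 + 7   # arbitrary offset and scale
y = layer(x)
print(y.mean(dim=1))  # each row's mean is ~0
print(y.std(dim=1))   # each row's std is ~1
```

If you want the statistics computed per feature across the batch (with running averages at eval time) rather than per sample, the built-in nn.BatchNorm1d is the closer fit.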
@jmaronas: That looks good. Do you think it will behave like an nn.Tanh() layer? The purpose of the normalization layer is similar to the tanh layer, except that it normalizes the input to zero mean and unit variance instead of [-1, 1].
What do you mean by "behave like a tanh layer"? There are two differences here. Tanh guarantees a bounded output, while zero-mean/unit-std normalization does not. And tanh is a non-linear transformation, while zero-mean/unit-std normalization is linear.
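To illustrate the bounded-vs-unbounded point, here is a small check (the example input with an outlier is my own construction): tanh maps every input into [-1, 1], while a standardized outlier can land well outside that range.

```python
import torch

# Nine zeros plus one large outlier.
x = torch.tensor([0.0] * 9 + [100.0])

# tanh: every output lies in [-1, 1], no matter how large the input.
print(torch.tanh(x))

# Zero mean / unit std: a linear (affine) rescaling, so the
# outlier's standardized value is not bounded (here its max is ~2.85).
z = (x - x.mean()) / x.std()
print(z)
```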