Implement custom LayerNormalization layer for channel-wise normalization

Hi, currently PyTorch supports the LayerNorm operation with normalized_shape in the form [∗×normalized_shape[0]×normalized_shape[1]×…×normalized_shape[−1]]. However, I want to implement a custom LayerNorm operation that only normalizes across channels. For example, given an input of size [N, C, W, H], the custom layer would normalize across C, but not W or H. I tried to look for the source code of the LayerNorm implementation so I could modify it, but couldn't find it. Can anybody point me to the location of the source code? I'm also looking at this tutorial Extending PyTorch — PyTorch master documentation to try to write the custom layer. It'd also be great if someone could show me the necessary steps to build the described custom layer, or point me to some relevant tutorials. Thank you very much!
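In case it helps, here is a minimal sketch of what such a layer could look like. It doesn't touch the LayerNorm C++ source at all; it just computes the mean and variance over the channel dimension (dim=1) directly in a custom `nn.Module`, with per-channel affine parameters. The class name `ChannelLayerNorm` and the `eps` default are my own choices, not anything from the PyTorch API:

```python
import torch
import torch.nn as nn

class ChannelLayerNorm(nn.Module):
    """Sketch of a LayerNorm variant that normalizes only across
    the channel dimension of an [N, C, H, W] input."""

    def __init__(self, num_channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        # learnable per-channel scale and shift, like LayerNorm's
        # elementwise affine parameters
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x):
        # statistics over C only, for each (n, h, w) position
        mean = x.mean(dim=1, keepdim=True)
        var = x.var(dim=1, keepdim=True, unbiased=False)
        x = (x - mean) / torch.sqrt(var + self.eps)
        # broadcast the per-channel affine parameters over N, H, W
        return x * self.weight[None, :, None, None] + self.bias[None, :, None, None]

# usage sketch
layer = ChannelLayerNorm(num_channels=8)
out = layer(torch.randn(2, 8, 4, 4))
```

An alternative that avoids writing the math yourself is to permute the channel dimension to the end, call `nn.LayerNorm(C)` (which then normalizes over that last dimension only), and permute back; the explicit version above just makes the computation easier to see and modify.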
