Where is the actual code for LayerNorm (torch.nn.functional.layer_norm)

I am looking for the implementation of torch.nn.functional.layer_norm. It links me to this doc, which then links me to this one.

But I can’t find where torch.layer_norm is implemented.

According to the documentation, the math seems to be the following:


import torch

x = torch.randn(50, 20, 100)

# manual mean and std
mean = x.sum(axis=0) / x.shape[0]
std = (((x - mean) ** 2).sum() / x.shape[0]).sqrt()

LayerNorm = torch.nn.LayerNorm(x.shape, elementwise_affine=True)

torch_layernorm = LayerNorm(x)
My_LayerNorm = (x - mean) / std * LayerNorm.weight + LayerNorm.bias

print(My_LayerNorm)
print(torch_layernorm)

However, my output and the LayerNorm output are different…

You can find the (CPU) C++ implementation here.
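
For reference, here is a minimal Python sketch of my reading of the documented formula (not the actual kernel): the mean and biased variance are taken over the trailing dimensions given by normalized_shape, then the affine weight and bias are applied.

import torch

x = torch.randn(50, 20, 100)
ln = torch.nn.LayerNorm(x.shape, elementwise_affine=True)

# Normalize over the trailing dims listed in normalized_shape (here: all three dims).
dims = tuple(range(-len(ln.normalized_shape), 0))
mean = x.mean(dim=dims, keepdim=True)
var = x.var(dim=dims, unbiased=False, keepdim=True)  # biased variance, per the docs
manual = (x - mean) / torch.sqrt(var + ln.eps) * ln.weight + ln.bias

print(torch.allclose(manual, ln(x), atol=1e-5))  # expected: True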


Hi @ptrblck, could you tell me where I can find native_layer_norm, as called in the line return std::get<0>(at::native_layer_norm(input, normalized_shape, weight, bias, eps));?