What exactly does F.batch_norm do?

I could not find docs for this function.
Could someone elaborate on it? Thanks


It’s calling torch._C._functions.BatchNorm
And I think you can find the underlying code here: https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/functions/batch_normalization.cpp

If you are just trying to understand more generally what is batch normalisation, there is a doc for modules nn.BatchNorm1d/2d/3d: http://pytorch.org/docs/master/nn.html#torch.nn.BatchNorm1d
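To make the behaviour concrete, here is a small sketch (names and shapes are my own illustration, not from the thread): in training mode, `F.batch_norm` normalizes each channel using the mean and biased variance computed over the batch and spatial dimensions, which we can verify by redoing the computation by hand.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(4, 3, 8, 8)  # (N, C, H, W)
running_mean = torch.zeros(3)
running_var = torch.ones(3)

# training=True: normalize with the current batch statistics
# (and update running_mean/running_var in place as a side effect)
out = F.batch_norm(x, running_mean, running_var,
                   weight=None, bias=None,
                   training=True, momentum=0.1, eps=1e-5)

# By hand: per-channel mean and biased variance over the (N, H, W) dims
mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
expected = (x - mean) / torch.sqrt(var + 1e-5)

print(torch.allclose(out, expected, atol=1e-4))  # True
```

With `training=False` it would instead normalize using the supplied `running_mean`/`running_var` and leave them untouched.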


Thank you for your reply. I am not clear on how instance normalization is computed. From the source code, it seems that it calls F.batch_norm. But if the batch size is 1, as in the following case, will that be problematic for batch_norm?

# Apply instance norm
input_reshaped = input.contiguous().view(1, b * c, *input.size()[2:])
out = F.batch_norm(
    input_reshaped, running_mean, running_var, weight, bias,
    True, self.momentum, self.eps)

Also, does F.batch_norm compute an individual mean for each feature map (as required for instance norm)?
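A sketch that may answer this (shapes and names are my own, not from the InstanceNorm source): after the `view(1, b * c, ...)` reshape, every (sample, channel) pair becomes its own "channel", so batch_norm's per-channel statistics are exactly the per-feature-map statistics instance norm needs. The batch dimension being 1 is therefore intentional, not problematic.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
b, c = 2, 3
x = torch.randn(b, c, 4, 4)

# Fold the batch dim into the channel dim, as in the snippet above
x_reshaped = x.contiguous().view(1, b * c, 4, 4)
out = F.batch_norm(x_reshaped, None, None,
                   weight=None, bias=None,
                   training=True, eps=1e-5).view(b, c, 4, 4)

# Instance norm by hand: normalize each feature map with its own
# mean/variance over the spatial dims only
mean = x.mean(dim=(2, 3), keepdim=True)
var = x.var(dim=(2, 3), unbiased=False, keepdim=True)
expected = (x - mean) / torch.sqrt(var + 1e-5)

print(torch.allclose(out, expected, atol=1e-4))  # True
```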


I am also interested in understanding how that layer works and what part of it updates the variances and means. Did you find out?
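As far as I can tell (this is my own sketch, not an excerpt from the C++ source), the running statistics are updated in place by the batch_norm call itself whenever `training=True`, using the exponential moving average `running = (1 - momentum) * running + momentum * batch_stat`, with the unbiased variance used for `running_var`:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(8, 3, 4, 4)
running_mean = torch.zeros(3)
running_var = torch.ones(3)
momentum = 0.1

# The call mutates running_mean and running_var in place
F.batch_norm(x, running_mean, running_var,
             weight=None, bias=None,
             training=True, momentum=momentum, eps=1e-5)

batch_mean = x.mean(dim=(0, 2, 3))
batch_var = x.var(dim=(0, 2, 3), unbiased=True)  # unbiased for the running stat

print(torch.allclose(running_mean, momentum * batch_mean, atol=1e-4))
print(torch.allclose(running_var,
                     (1 - momentum) * 1.0 + momentum * batch_var, atol=1e-4))
```

So no separate optimizer step touches them; they change as a side effect of the forward pass in training mode.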
