How to get batch norm's running stats such as running_var and running_mean in PyTorch

Hi friends,
I have a question. Suppose I have a model that contains batch norm layers. Due to a task's requirements, I need to get the batch norm layers' running_var and running_mean at the end of the training or evaluation process.
For example, here is a simple piece of code:

    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.conv1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=3, padding=1, bias=False)
            self.bn = nn.BatchNorm2d(out_channels)

        def forward(self, x):
            x = self.conv1(x)
            x = self.bn(x)
            return x

    my_net = Net(in_channels=3, out_channels=16)

I want to get the running_var and running_mean of my_net's batch norm layers. How can I do that? Thank you!

You can directly access them via:

    my_net.bn.running_mean
    my_net.bn.running_var
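
If your model has several batch norm layers and you want all of their running stats at once, here is a minimal sketch that walks named_modules() and collects them into a dictionary (assuming my_net is the Net instance from the code above):

    import torch.nn as nn

    # Collect running stats from every batch norm layer in the model.
    # Assumes `my_net` is the Net instance defined above.
    stats = {}
    for name, module in my_net.named_modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            stats[name] = (module.running_mean.clone(), module.running_var.clone())

    print(stats["bn"])  # running mean and variance of my_net.bn

The .clone() calls are there so the saved tensors are not changed by later forward passes in training mode.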

Hello, I have encountered a problem:

    (Pdb) p self.norm.norm.running_var
    tensor([nan, nan, nan, ..., nan, nan, nan], device='cuda:0')
    (Pdb) p self.norm.norm.running_mean
    tensor([nan, nan, nan, ..., nan, nan, nan], device='cuda:0')

Could you please give me some advice on how to debug this problem?
Thanks very much.

Running stats can become invalid if an input activation contained invalid values, such as Infs or NaNs.
You could add checks to your code to make sure the input activations to all batch norm layers contain only finite values, and narrow down which layer is producing the invalid activations.
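
For example, here is a minimal sketch of such a check using forward pre-hooks; the helper name add_finite_checks and the layer types it inspects are my own choices for illustration, not something from your code:

    import torch
    import torch.nn as nn

    def add_finite_checks(model):
        # Register a forward pre-hook on every batch norm layer that raises
        # as soon as the layer receives a non-finite input, so the offending
        # layer can be identified by name.
        def make_hook(name):
            def hook(module, inputs):
                for inp in inputs:
                    if torch.is_tensor(inp) and not torch.isfinite(inp).all():
                        raise RuntimeError(f"Non-finite input to batch norm layer '{name}'")
            return hook

        for name, module in model.named_modules():
            if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                module.register_forward_pre_hook(make_hook(name))

After calling add_finite_checks(model) you can run a forward pass as usual; the first batch norm layer that receives an Inf or NaN input will raise and tell you its name.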