Batch normalization - don't see all the parameters

I’ve saved a batch normalization layer (2d) and I only see the weight and bias, but not ‘running_mean’ or ‘running_var’. I’ve used: torch.save(model.state_dict()…

I may be doing something funny. I’m not loading these with torch, but with regular Python code, as a dict. Is there a way to figure out ‘running_mean’ and ‘running_var’ from the weight and bias? I don’t see them explicitly in the dictionary.

Why am I not using PyTorch to load? Don’t ask; it’s a Kaggle competition limitation.

Okay, it is more involved. I print to standard output and that then goes into a Python file, so the issue is probably not PyTorch related. I’ll have to check why the running stats were not included in the output. I believe the issue as raised above (by me) can be closed.
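
Indeed, a quick sanity check on a standalone BatchNorm2d layer (a toy layer here, not my actual model) confirms that state_dict() itself does contain the running stats:

import torch.nn as nn

bn = nn.BatchNorm2d(8)
# prints: ['weight', 'bias', 'running_mean', 'running_var', 'num_batches_tracked']
print(list(bn.state_dict().keys()))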

Here is my code that includes the “bug”:

my_state_dict = {
    name: param.tolist()
    # named_parameters() yields only learnable parameters (weight, bias);
    # buffers such as running_mean/running_var are skipped
    for name, param in model.named_parameters()
}

I’m now adding the running_mean and running_var “manually”.
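
For completeness, the manual workaround looks roughly like this (‘bn1’ is a placeholder attribute name, not the real layer name in my model):

# buffers live on the module itself, not in named_parameters()
my_state_dict['bn1.running_mean'] = model.bn1.running_mean.tolist()
my_state_dict['bn1.running_var'] = model.bn1.running_var.tolist()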

Running stats are buffers, not parameters, which is why they are not returned via .named_parameters(). Use the state_dict directly or make sure to save .named_buffers() as well.
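
A minimal sketch of both options, keeping the tolist() serialization from the snippet above:

# Option 1: state_dict() already contains parameters and buffers
my_state_dict = {name: t.tolist() for name, t in model.state_dict().items()}

# Option 2: combine named_parameters() with named_buffers() explicitly
my_state_dict = {
    name: t.tolist()
    for name, t in list(model.named_parameters()) + list(model.named_buffers())
}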
