NaN in ResNet pretrained BatchNormalization Layers

When I use the pretrained ResNet models provided by PyTorch (torchvision.models), for example resnet50, I find quite a few NaN values in the running_mean and running_var buffers of the BatchNorm layers. Because of this I cannot use resnet50.eval(), since it produces all-NaN outputs.
Is this a problem? How can I fix it?
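
For context, this is roughly how I notice the all-NaN outputs (a minimal sketch with a random dummy input, not my actual data or pipeline):

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()  # eval mode uses the stored running_mean / running_var

with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))  # dummy input just to probe the forward pass

print(torch.isnan(out).any())  # True would indicate NaN somewhere in the forward pass, e.g. from the BN buffers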

UPDATE:
This seems to be more of an IDE problem; I was using PyCharm. Here is the code I used to check whether there are NaNs in the BN layers:

import numpy as np
import torch.nn as nn

def checkBNNaN(model):
    # Walk all sub-modules and inspect the BatchNorm running statistics.
    for idx, s_module in enumerate(model.modules()):
        if isinstance(s_module, nn.BatchNorm2d):
            if np.isnan(s_module.running_mean.numpy()).any():
                print("BN # {:d}  running_mean has NaN".format(idx))
            if np.isnan(s_module.running_var.numpy()).any():
                print("BN # {:d}  running_var has NaN".format(idx))

The result is completely different if you check a model’s BN running statistics from within PyCharm on a Linux machine versus from a plain terminal.

UPDATE2:
Updating PyCharm to a newer version solved this issue…
Sorry for the spam.