'BatchNorm2d' object has no attribute 'track_running_stats'

Sorry, I am using a new computer and I just installed the newest version of PyTorch.
My code runs perfectly on my old computer, but now BatchNorm2d raises this error when the model is switched to evaluation mode:

'BatchNorm2d' object has no attribute 'track_running_stats'

Does anyone know how to solve this?
I saw the source-code change warning, but I am not sure how to solve it.
My model is based on a pre-trained ResNet101.

Could you print your torch.__version__?
Also could you post a small code snippet reproducing the error?

Hi, I solved the problem by installing PyTorch 0.3.1.
Here is my code:

import torch
from torch.autograd import Variable  # needed in 0.3.x; Variables were merged into Tensors in 0.4

# "My pretrained model" is ResNet101 + 1 Conv2d. It was trained on my old computer with PyTorch 0.3.1.
net.load_state_dict(torch.load("#My pretrained model"))
net.cuda()

val_set = Voc2012Segmentation('val', transform=None,
                              target_transform=None)
test_loader = torch.utils.data.DataLoader(val_set, batch_size=4,
                                          shuffle=False, num_workers=0)

dataiter = iter(test_loader)
net.eval()
for img_num in range(20):
    image, target = dataiter.next()
    image = Variable(image.cuda())
    target = Variable(target.cuda())
    heatmap = net(image)

Has the problem been solved? I met the same problem. If it has, could you give a hint?


As I said in the previous post, I solved the problem by installing a previous version of PyTorch, 0.3.1.
Hope it helps 🙂

I encountered the same problem. I upgraded to v0.4 and rebuilt the model, which required some minor reworking of the code. The model now works without error under torch.__version__ 0.4.0.

Can you elaborate on the minor changes you had to make? Also, by rebuilding the model, do you mean retraining? @Leif_Ulstrup

@C0rine, apologies for the slow response. The only issue I had was a previously working function that computed accuracy and sensitivity values had to be updated. The V0.4.0 version of PyTorch required me to explicitly cast the tensor to a float to compute the value. It may have been ‘operator error’ on my part not following a guideline but it worked in V0.3. After figuring that out, I was able to ‘retrain’ the model and use it in a separate production prediction microservice. I want to do more study on the differences in the model predictions between the two versions. I noticed some differences in prediction computations but have not had a chance to do a rigorous analysis. Good luck.

Thanks for the elaboration!

I eventually also decided to just roll back to v0.3.1 to bypass the issue as I did not have the resources to retrain the model.

I have the same issue 🙁 It seems it will only be solved in the next version of PyTorch.

You can try to load the parameters instead of the model.
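The suggestion above avoids this whole class of cross-version errors: a state_dict serializes only tensors, whereas saving the whole model pickles the module objects themselves, which may lack attributes the newer PyTorch expects. A minimal sketch (the file name is just an example):

```python
import torch
import torch.nn as nn

# Saving: serialize only the parameters and buffers, not the module objects.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
torch.save(model.state_dict(), "checkpoint.pth")

# Loading: rebuild the architecture in code, then restore the weights.
# The BatchNorm2d instances are created by the *current* PyTorch version,
# so they carry whatever attributes (e.g. track_running_stats) it expects.
restored = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
restored.load_state_dict(torch.load("checkpoint.pth"))
restored.eval()
```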

I am also facing the same problem and want to roll back to the previous version, i.e. 0.3.1. How can I do that using pip? Can you please share your method of downgrading the PyTorch version?

Dear Ayush,
This might help.
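For the pip route, the usual approach is to pin the version in the install command. Whether a 0.3.1 wheel is available for your particular platform and Python version is an assumption here; the conda channel covered more platforms at the time:

```shell
# Remove the current build, then pin the old release
pip uninstall torch
pip install torch==0.3.1

# Or, if you use conda:
# conda install pytorch=0.3.1 -c pytorch
```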

Could I downgrade PyTorch, or should I do something more after upgrading?

I used conda, but I think the commands should be similar.

Is there any update to this? Still having the same issue.
I didn’t get it. What is it related to?

It's related to the new version of PyTorch.
Try version 0.3 instead of 0.4.

Your model was trained with PyTorch 0.3.x but is being run with PyTorch >= 0.4.0.
You can patch the BatchNorm2d parameters yourself.
For example, define this function

def recursion_change_bn(module):
    if isinstance(module, torch.nn.BatchNorm2d):
        # This attribute was introduced in PyTorch 0.4, so 0.3.x checkpoints lack it.
        module.track_running_stats = True
    else:
        for name, child in module._modules.items():
            recursion_change_bn(child)
    return module

and use it when you load the model:

check_point = torch.load(check_point_file_path)
model = check_point['net']   # the whole pickled module was saved, not a state_dict
recursion_change_bn(model)   # patch every BatchNorm2d in place
model.eval()

This way I have run a 0.3.1 model in PyTorch 0.4.1 and 1.0.0.
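As a variation on the recursion above, `nn.Module.modules()` already walks the whole module tree, so the manual recursion can be dropped. This is my own sketch, not the poster's original code; the function name is hypothetical:

```python
import torch
import torch.nn as nn

def add_missing_bn_attrs(model):
    # .modules() yields the model and every submodule, recursing for us.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and not hasattr(m, "track_running_stats"):
            m.track_running_stats = True  # attribute added in PyTorch 0.4
    return model
```

Guarding with `hasattr` makes the patch a no-op on checkpoints that already have the attribute, so it is safe to call unconditionally after loading.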


Thank you for your excellent work. Your approach is excellent!