How to keep the result fixed during inference with models using batchnorm

I'm facing the same problem as the linked post above, on CIFAR10/ResNet. My model uses batchnorm, and updates to its running statistics change the result.
I want the result to stay fixed. How can I do that?


import torch

# build the model, load the pretrained weights, and switch to eval mode
net = ResNet(ResidualBlock, 3, 16, 3, 10).to("cuda")
net.load_state_dict(torch.load("pretrained/ckpt.pth"), strict=False)
net.eval()  # batchnorm now uses its stored running statistics

with torch.no_grad():
    input = input.to("cuda")
    output = net(input)
    _, predicted = output.max(1)
    print(output)
    print("%5s" % classes[predicted[0]])

The answer from the linked post explains that the running statistics in batchnorm layers are updated during training and used during evaluation (model.eval()).
If you want to keep these stats constant, call model.eval() and don't perform any forward passes while the model is in training mode, since those passes would update the running estimates.
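
For completeness: if you later need to fine-tune the model but still want the batchnorm running statistics frozen, one common pattern (a sketch, not something stated in the linked answer) is to switch only the batchnorm modules back to eval mode after calling model.train():

import torch.nn as nn

net.train()  # rest of the network trains normally
for m in net.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.eval()  # BN uses, and no longer updates, its stored running stats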


My net was in evaluation mode, but I noticed that I hadn't loaded the optimizer. After loading the optimizer the problem was solved. Thanks!
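
For anyone wanting to restore the optimizer as well, a minimal sketch is shown below. It assumes the checkpoint stores model and optimizer state under separate keys ("model" and "optimizer" are hypothetical names here, and the SGD hyperparameters are placeholders); adjust both to however your checkpoint was saved.

import torch

ckpt = torch.load("pretrained/ckpt.pth", map_location="cuda")
net.load_state_dict(ckpt["model"], strict=False)

optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)
optimizer.load_state_dict(ckpt["optimizer"])  # restores momentum buffers, step counts, etc.
net.eval()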