Changing batch size

Why do the neurons' weights change when I change the batch size? If I test new data with the same batch size I used to train the network, the results are good. If I change the batch size, the results are bad.

import torch
import torch.nn as nn
from torchvision.models.resnet import ResNet, BasicBlock

class MyResNet(ResNet):
    def __init__(self):
        super(MyResNet, self).__init__(BasicBlock, [2, 2, 2, 2], num_classes=3)
        # replace the first conv layer so the network accepts single-channel input
        self.conv1 = torch.nn.Conv2d(1, 64,
            kernel_size=(7, 7),
            stride=(2, 2),
            padding=(3, 3), bias=False)
...
model.load_state_dict(torch.load('save.pth'))
criterion = nn.CrossEntropyLoss(reduction='sum')
optimizer = torch.optim.AdamW(model.parameters(), lr=learning_rate)
...
# single training step
outputs = model(x)
loss1 = criterion(outputs, y)
optimizer.zero_grad()
loss1.backward()
optimizer.step()

The weights do not change based on the batch size or the forward pass alone.
Could you explain this issue a bit more?

Make sure to call model.eval() before evaluating your model; otherwise, for example, the running estimates of the batch norm layers will be updated, and those updates depend on the batch size you use.
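
A minimal evaluation sketch, assuming a trained model and a test_loader DataLoader (both names are placeholders): switching to eval mode makes the batch norm layers use their frozen running statistics, so the predictions no longer depend on the evaluation batch size.

model.eval()                      # batch norm uses running stats, dropout is disabled
with torch.no_grad():             # no gradients needed for evaluation
    for x, y in test_loader:      # test_loader is an assumed DataLoader
        outputs = model(x)
        preds = outputs.argmax(dim=1)
model.train()                     # switch back before resuming training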

I don't know how, but my network seems to learn the samples per batch.
I did not shuffle the data.
After training, if I change the batch size, the network can no longer predict correctly.
If I call model.eval(), the results are very bad.

This might be due to skewed running estimates in your batch norm layers.
Try using a larger batch size during training or adapting the momentum of the batch norm layers.
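
A minimal sketch of the second suggestion, assuming model is the trained MyResNet from above: lowering the momentum of every BatchNorm2d layer makes the running estimates change more slowly, which can help when training with small batches (0.01 is just an illustrative value; PyTorch's default is 0.1).

import torch.nn as nn

# reduce the batch norm momentum so the running stats are smoothed over more batches
for module in model.modules():
    if isinstance(module, nn.BatchNorm2d):
        module.momentum = 0.01   # default is 0.1; smaller values weight past estimates more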