All stochastic elements in NN are off, still getting different results

Hi, my network gives me a different accuracy each time I run it; before I touched anything, the difference was around 0.2 top-1 points.
These are the elements I changed or added:
1.

torch.manual_seed(60)
torch.cuda.manual_seed(60)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
(NumPy isn't used anywhere, so I didn't add a NumPy seed.)

2. No augmentations:

from torchvision import transforms

train_transform = transforms.Compose([
        # transforms.RandomHorizontalFlip(),
        # transforms.RandomCrop(32, 4),
        transforms.ToTensor(),
        normalize,  # a transforms.Normalize(...) instance defined elsewhere
    ])
  3. No data shuffling.
  4. Saved the model's initial weights and loaded the same initialization for each run (a sketch of both steps follows below).
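
For reference, here is a minimal sketch of what steps 3 and 4 can look like; `train_dataset`, `model`, and the file name `init.pth` are placeholders, not from my actual code:

import torch
from torch.utils.data import DataLoader

# Deterministic batch order: no shuffling, and num_workers=0 so no
# worker processes introduce their own randomness.
train_loader = DataLoader(train_dataset, batch_size=128,
                          shuffle=False, num_workers=0)

# Save the freshly initialized weights once...
torch.save(model.state_dict(), "init.pth")
# ...and load exactly the same initialization at the start of every run.
model.load_state_dict(torch.load("init.pth"))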

Now the difference is around 0.09.
What did I forget? What other stochastic ingredients are there?
Thanks.

Make sure to call model.eval() before running the validation loop.
This disables dropout layers and makes batchnorm layers use their running statistics instead of the current batch's mean and standard deviation.
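
A minimal sketch of that pattern, assuming a `model` and a `val_loader` like yours:

import torch

correct = total = 0
model.eval()  # dropout off; batchnorm uses its running statistics
with torch.no_grad():  # no gradients needed during validation
    for inputs, targets in val_loader:
        outputs = model(inputs)
        correct += (outputs.argmax(dim=1) == targets).sum().item()
        total += targets.size(0)
print(f"top-1 accuracy: {correct / total:.4f}")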

model.eval() is already being called, of course.
Thanks.