I want to train a model for more than 100 epochs, and I want to see how the sequence of batches affects the model's results.

As far as I know, with SGD, if the model is initialized with a fixed seed and trained with a fixed batch sequence, training is deterministic. So with the batch sequence fixed, I can check how the initialization affects the variation in the model's performance. And since I want to train the model for 160 epochs, I have to save 160 different fixed seeds for the batch ordering to compare the differences between models.

I failed to find any PyTorch dataloader functions, sources, or code that implement this. Am I right?

Hi,

The dataloader should be deterministic if the torch and torch.cuda seeds are fixed in your script.
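For instance, the shuffling order can be pinned down by passing a seeded `torch.Generator` to the `DataLoader`. A minimal sketch (the toy dataset and the `batch_order` helper are just for illustration):

```
import torch
from torch.utils.data import DataLoader, TensorDataset

data = TensorDataset(torch.arange(10).float())

def batch_order(seed):
    # seed only this loader's shuffling, not the global RNG
    g = torch.Generator()
    g.manual_seed(seed)
    loader = DataLoader(data, batch_size=2, shuffle=True, generator=g)
    return [batch[0].tolist() for batch in loader]

# same seed -> identical batch sequence across runs
assert batch_order(42) == batch_order(42)
```

Re-seeding the generator with a different value per run gives you a different but reproducible batch sequence each time.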

Be careful as well that the computations of the net are not necessarily deterministic; in particular, a few cuDNN operations are not. You will need to set `torch.backends.cudnn.deterministic = True`, but this will have a noticeable impact on speed.
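As a sanity check, on CPU you can verify that fixing the seed makes two short training runs identical. A toy sketch (the one-layer model, random data, and loop length are arbitrary assumptions):

```
import torch
import torch.nn as nn

def train_once(seed):
    # seeding before model creation fixes both the init and the data
    torch.manual_seed(seed)
    model = nn.Linear(4, 1)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(8, 4)
    y = torch.randn(8, 1)
    losses = []
    for _ in range(5):
        opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

# identical seeds give bitwise-identical loss curves on CPU
assert train_once(0) == train_once(0)
```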

Thanks, that was very helpful. I was confused because I had fixed the transforms, the initialization, and the batches and still got non-deterministic results, but now it is solved.

I think you need to set the seed for the `random` library as well. With the following lines you will get deterministic results.

```
import random
import numpy as np
import torch

torch.manual_seed(randomseed)
torch.cuda.manual_seed_all(randomseed)
random.seed(randomseed)
np.random.seed(randomseed)
torch.backends.cudnn.deterministic = True
```
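One more caveat: if the `DataLoader` uses `num_workers > 0`, each worker process gets its own NumPy/`random` state, so those should be re-seeded in a `worker_init_fn`. A sketch following the usual PyTorch reproducibility pattern (the toy dataset is just for illustration):

```
import random
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_worker(worker_id):
    # derive the worker's NumPy/random seeds from torch's per-worker seed
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

g = torch.Generator()
g.manual_seed(0)

loader = DataLoader(
    TensorDataset(torch.arange(8).float()),
    batch_size=2,
    shuffle=True,
    num_workers=2,
    worker_init_fn=seed_worker,
    generator=g,
)
```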