Why do random functions return the same values during training with Ignite?

I use Ignite to train my model.
np.random.randint, torch.randint… called from @trainer.on(Events.EPOCH_COMPLETED(every=1)) return the same values.
It looks like a seed is hardcoded. Why is it there, and how can I disable it?
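
To illustrate, a minimal sketch of the kind of handler I mean (the Engine's update function here is a dummy standing in for my real one, and the handler name is just for illustration):

import numpy as np
import torch
from ignite.engine import Engine, Events

trainer = Engine(lambda engine, batch: None)  # dummy update function

@trainer.on(Events.EPOCH_COMPLETED(every=1))
def log_random_values(engine):
    # in my runs these print the same values at the end of every epoch
    print(np.random.randint(0, 100), torch.randint(0, 100, (1,)).item())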

CC @vfdev-5, who is the main author of the library.

@odats this is done to have reproducible trainings, so that a user can resume the training from a checkpoint (based on epoch or iteration) and get "almost" the same training behaviour. Please see this: https://pytorch.org/ignite/concepts.html#resume-training

To alter the seed, you can pass it as an argument to run:

trainer.run(dataloader, seed=1234)

That being said, we are thinking of changing this behaviour, and master/nightly releases already set up the seed using torch's default random generator: https://github.com/pytorch/ignite/pull/799
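
With that change, the idea is that you control the seed yourself before calling run. A rough sketch, assuming your ignite version provides ignite.utils.manual_seed (otherwise seed torch/numpy/random directly):

from ignite.utils import manual_seed

manual_seed(1234)  # seeds python's random, numpy and torch, if available in your version
trainer.run(dataloader, max_epochs=10)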

Hope this helps. Feel free to ask other questions.

PS. @ptrblck thanks for the notification! I must have missed the email somehow…

I have some randomness in my Dataset's __getitem__: I do random shuffling of values. It looks like the seed is not hardcoded there, and I get truly random data at each training step. Can you please confirm?

Sorry, I do not get what you would like me to confirm.
On ignite's side, the seed is used to "synchronize" the random state at each dataloader restart (which in most cases corresponds to the epoch length). In this way, for a given seed, the dataflow can be the same for a given iteration/epoch. More precisely, it is something like:

for e in range(num_epochs):
    set_seed(seed + e)
    do_single_epoch_iterations(dataloader)
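
set_seed above is pseudo-code; a minimal version covering the usual generators could look like this (an illustration, not ignite's actual implementation):

import random
import numpy as np
import torch

def set_seed(seed):
    # seed the three generators commonly used in a training pipeline
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)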

Hope this answers your question.

I mean, in my custom dataset I have code like this:

import numpy as np
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __getitem__(self, idx):
        item = self.dataset.iloc[idx]
        # randomly permute the values of the selected row
        new_item = np.random.permutation(item)
        return new_item

By default, it returns a random seed each time. My question is: will my dataset return new, randomly permuted examples for each epoch?

I have found some information:
However, while resuming from iteration, random data augmentations are not synchronized in the middle of the epoch and thus batches remaining until the end of an epoch can effectively be different from those of the initial run.

IMO, there is no problem with your __getitem__'s permutations; they will all be generated randomly, as expected.

By default, it returns a random seed each time.

new_item = np.random.permutation(item) returns a random value, not the random state's seed, right?
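
A quick check (the outputs below are just examples, each run will differ):

import numpy as np

item = np.arange(5)
print(np.random.permutation(item))  # e.g. [3 0 4 1 2]
print(np.random.permutation(item))  # e.g. [1 4 2 0 3] -- a new permutation on each call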

I have found some information:
However, while resuming from iteration, random data augmentations are not synchronized in the middle of the epoch and thus batches remaining until the end of an epoch can effectively be different from those of the initial run.

This info is about the case when you resume/restart the training from a checkpoint that saved the trainer's state in the middle of an epoch (e.g. I store checkpoints every 1000 iterations and my epoch is 5432 iterations long).
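
For context, such mid-epoch checkpointing could be set up along these lines (a sketch, assuming a recent ignite version with Checkpoint/DiskSaver; the directory and the model/optimizer objects are placeholders):

from ignite.engine import Events
from ignite.handlers import Checkpoint, DiskSaver

to_save = {"trainer": trainer, "model": model, "optimizer": optimizer}
handler = Checkpoint(to_save, DiskSaver("/tmp/checkpoints", create_dir=True), n_saved=2)
# save a checkpoint every 1000 iterations, i.e. possibly in the middle of an epoch
trainer.add_event_handler(Events.ITERATION_COMPLETED(every=1000), handler)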

Thank you for the clarification. Everything is clear now. It would be nice to have some documentation about this behaviour.

I agree, we need to improve the documentation on this.