What is the most efficient way to get deterministic data augmentation (i.e. the transformations are random each epoch, but can be reliably reproduced for every data point)?
Currently I am thinking of keeping a list with one numpy RandomState object per data point. Even if the DataLoader uses multiple worker processes, each object is consumed exactly once per epoch, so every data point receives the exact same sequence of random transformations when, e.g., restarting training from scratch (assuming the RandomState objects are re-initialized with the same seed). A single shared RandomState is not enough: with num_workers > 0 multiple processes would access it, and because the data points are shuffled every epoch, the order of draws from it would differ between runs.
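Concretely, the idea would look something like this minimal sketch (the class name, the `seed + i` derivation, and the Gaussian-noise transform are illustrative placeholders; in practice the class would subclass `torch.utils.data.Dataset` and apply a real augmentation):

```python
import numpy as np

class AugmentedDataset:  # would subclass torch.utils.data.Dataset in practice
    def __init__(self, data, seed=0):
        self.data = data
        # One RandomState per data point, all derived from the same base seed,
        # so re-creating the dataset with the same seed replays the same
        # per-sample augmentation streams.
        self.states = [np.random.RandomState(seed + i) for i in range(len(data))]

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        rng = self.states[idx]
        # Illustrative transform: additive Gaussian noise drawn from this
        # data point's own stream, independent of shuffling order.
        noise = rng.normal(scale=0.1, size=np.shape(self.data[idx]))
        return self.data[idx] + noise
```

Because each index has its own stream, the k-th epoch's transform for sample i depends only on (seed, i, k), not on the shuffled order in which samples are visited.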
Is there a more efficient way to do this, given that multiple processes apply a random transformation to each data point every epoch, and the order of the data points changes due to shuffling?